SuxyEE
by SuxyEE

query_logs

Query and analyze log data from Alibaba Cloud SLS logstores to debug issues, investigate errors, and perform log analysis using time ranges and filter queries.

Instructions

Query log data from an SLS logstore with a time range and optional filter query. Returns formatted log entries. Use for debugging, error investigation, and log analysis. Supports SLS query syntax like "level: ERROR", "content: timeout", "status: 500 AND method: POST".

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project | Yes | SLS project name | — |
| logstore | Yes | SLS logstore name | — |
| query | No | SLS query statement. Examples: "*" for all logs, "level: ERROR", "content: timeout", "level: ERROR AND status: 500" | * |
| time_range | No | Relative time range. Formats: 1m, 5m, 15m, 30m, 1h, 2h, 6h, 12h, 1d, 3d, 7d | 15m |
| from | No | Start time as Unix timestamp (seconds). Overrides time_range if provided. | — |
| to | No | End time as Unix timestamp (seconds). Used with the from parameter. | — |
| max_logs | No | Maximum number of logs to return (1-500). | 50 |
| region | No | Alibaba Cloud region ID, e.g. cn-hangzhou. Defaults to the SLS_REGION env variable. | — |
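
Putting the parameters together, a call might pass arguments like the following. The project and logstore names here are placeholders, not values from the source:

```typescript
// Illustrative arguments for a query_logs call; names are placeholders.
// If both `from` and `to` were set, they would override `time_range`.
const args = {
  project: 'my-project',
  logstore: 'app-logs',
  query: 'level: ERROR AND status: 500',
  time_range: '1h', // ignored when from/to are both supplied
  max_logs: 100,    // 1-500; defaults to 50 when omitted
};
console.log(JSON.stringify(args));
```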

Implementation Reference

  • The handleQueryLogs function implements the core logic for the "query_logs" tool, processing inputs, executing the query, and formatting the results.
    export async function handleQueryLogs(input: QueryLogsInput): Promise<string> {
      let from: number;
      let to: number;
    
      if (input.from && input.to) {
        from = input.from;
        to = input.to;
      } else {
        const range = parseTimeRange(input.time_range);
        from = range.from;
        to = range.to;
      }
    
      const result = await queryLogs({
        project: input.project,
        logstore: input.logstore,
        query: input.query,
        from,
        to,
        maxLogs: input.max_logs,
        region: input.region,
      });
    
      const fromStr = formatTimestamp(from);
      const toStr = formatTimestamp(to);
    
      const header = [
        `## SLS Query Results`,
        `**Project**: ${input.project} / **Logstore**: ${input.logstore}`,
        `**Time**: ${fromStr} → ${toStr}`,
        `**Query**: \`${input.query}\``,
        `**Returned**: ${result.logs.length} logs${result.hasMore ? ` (more available, total count: ${result.count})` : ` / total: ${result.count}`}`,
      ].join('\n');
    
      if (result.logs.length === 0) {
        return `${header}\n\nNo logs found matching the query.`;
      }
    
      const logEntries = result.logs.map((log, i) => formatLogEntry(log, i)).join('\n\n---\n\n');
    
      const footer = result.hasMore
        ? `\n\n> **Note**: Results truncated at ${input.max_logs}. Increase \`max_logs\` or narrow the query/time range.`
        : '';
    
      return `${header}\n\n${logEntries}${footer}`;
    }
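
`handleQueryLogs` delegates relative-range parsing to `parseTimeRange`, which is not shown in this excerpt. A minimal sketch, assuming the `<n>m` / `<n>h` / `<n>d` formats listed in the schema:

```typescript
// Hypothetical sketch of parseTimeRange (the real implementation is not
// shown in the excerpt). Accepts relative formats such as '15m', '2h',
// '7d' and returns a Unix-second window ending now.
function parseTimeRange(range: string = '15m'): { from: number; to: number } {
  const match = /^(\d+)([mhd])$/.exec(range);
  if (!match) {
    throw new Error(`Invalid time_range: ${range}`);
  }
  const unitSeconds: Record<string, number> = { m: 60, h: 3600, d: 86400 };
  const to = Math.floor(Date.now() / 1000);
  const from = to - Number(match[1]) * unitSeconds[match[2]];
  return { from, to };
}
```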
  • The Zod schema definition for the input parameters of the "query_logs" tool.
    export const queryLogsSchema = z.object({
      project: z.string().describe('SLS project name'),
      logstore: z.string().describe('SLS logstore name'),
      query: z
        .string()
        .default('*')
        .describe(
          'SLS query statement. Examples: "*" for all logs, "level: ERROR", "content: timeout", "level: ERROR AND status: 500"'
        ),
      time_range: z
        .string()
        .default('15m')
        .describe('Relative time range. Formats: 1m, 5m, 15m, 30m, 1h, 2h, 6h, 12h, 1d, 3d, 7d'),
      from: z
        .number()
        .optional()
        .describe('Start time as Unix timestamp (seconds). Overrides time_range if provided.'),
      to: z
        .number()
        .optional()
        .describe('End time as Unix timestamp (seconds). Used with from parameter.'),
      max_logs: z
        .number()
        .min(1)
        .max(500)
        .default(50)
        .describe('Maximum number of logs to return (1-500). Default: 50'),
      region: z
        .string()
        .optional()
        .describe('Alibaba Cloud region ID, e.g. cn-hangzhou. Defaults to SLS_REGION env variable.'),
    });
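
Because every optional field carries a `.default()`, `queryLogsSchema.parse` fills in missing values, so a caller only has to supply `project` and `logstore`. A hand-written sketch of that behavior (illustrative only; the real work is done by zod):

```typescript
// Hand-rolled sketch of the defaulting that queryLogsSchema.parse performs
// for the optional fields; zod's .default() handles this in the real code.
interface RawArgs {
  project: string;
  logstore: string;
  query?: string;
  time_range?: string;
  max_logs?: number;
}

function applyDefaults(args: RawArgs) {
  return {
    ...args,
    query: args.query ?? '*',
    time_range: args.time_range ?? '15m',
    max_logs: args.max_logs ?? 50,
  };
}

const filled = applyDefaults({ project: 'my-project', logstore: 'app-logs' });
console.log(filled.query, filled.time_range, filled.max_logs); // * 15m 50
```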
  • src/index.ts:90-92 (registration)
    Registration and invocation logic for the "query_logs" tool within the main server file.
    case 'query_logs': {
      const input = queryLogsSchema.parse(args);
      text = await handleQueryLogs(input);
      break;
    }
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the core functionality (querying logs with time range and filters) and mentions the return format ('formatted log entries'), but lacks details about authentication requirements, rate limits, pagination behavior, error handling, or whether this is a read-only operation (though implied by 'query').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core functionality, the second provides usage context and syntax examples. Every element serves a purpose with zero wasted words, making it easy to parse while remaining comprehensive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 8 parameters and no output schema, the description provides adequate context about what the tool does and when to use it, but lacks details about the return format (beyond 'formatted log entries'), error conditions, authentication requirements, and behavioral constraints. Given the absence of annotations and an output schema, this query tool's description would benefit from more detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds minimal value beyond the schema by mentioning 'time range and optional filter query' and providing SLS syntax examples, but doesn't explain parameter interactions (e.g., time_range vs from/to override) or add significant semantic context not already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('query'), resource ('log data from an SLS logstore'), and scope ('with a time range and optional filter query'). It distinguishes itself from siblings by specifying it returns formatted log entries for debugging/analysis, unlike get_log_histogram (statistical) or query_logs_sql (SQL-based).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('for debugging, error investigation, and log analysis') and mentions SLS query syntax, which helps differentiate it from SQL-based alternatives. However, it doesn't explicitly state when NOT to use it or provide direct comparisons with specific sibling tools like get_context_logs or query_logs_sql.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
