by SuxyEE

query_logs_sql

Execute SQL queries on Alibaba Cloud SLS logs to perform analysis, aggregation, and statistical calculations for monitoring and troubleshooting purposes.

Instructions

Execute a SQL query against an SLS project for log analysis and aggregation. Best for counting, grouping, and statistical analysis. Example: "SELECT status, count(*) as cnt FROM <logstore> WHERE __time__ > 1700000000 GROUP BY status".

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project | Yes | SLS project name | — |
| query | Yes | SQL query with mandatory time range. Must include a FROM clause with the logstore and a time filter using `__date__` or `__time__`. Example: `SELECT status, count(*) as cnt FROM <logstore> WHERE __date__ > '2024-01-01 00:00:00' GROUP BY status ORDER BY cnt DESC` | — |
| time_range | No | Relative time range used to fill the `__time__` filter. Formats: 15m, 1h, 6h, 1d | 1h |
| from | No | Start time as Unix timestamp (seconds). Overrides time_range. | — |
| to | No | End time as Unix timestamp (seconds). | — |
| region | No | Alibaba Cloud region ID, e.g. cn-hangzhou. Defaults to the SLS_REGION env variable. | — |
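To make the parameter interactions concrete, here are two illustrative argument objects (the project and logstore names are invented placeholders, not values from this server): one uses the relative time_range, the other uses absolute from/to, which take precedence when both are present.

```typescript
// Hypothetical argument objects for query_logs_sql. Field names follow the
// schema above; 'my-sls-project' and 'my-logstore' are placeholders.

// Relative window: the server resolves from/to from time_range (default 1h).
const relativeArgs = {
  project: 'my-sls-project',
  query:
    "SELECT status, count(*) as cnt FROM my-logstore WHERE __time__ > 1700000000 GROUP BY status",
  time_range: '6h',
};

// Absolute window: when both from and to are set, they override time_range.
const absoluteArgs = {
  project: 'my-sls-project',
  query:
    "SELECT method, avg(latency) as avg_ms FROM my-logstore WHERE __date__ > '2024-01-01 00:00:00' GROUP BY method",
  from: 1700000000,
  to: 1700003600, // one hour after `from`
  region: 'cn-hangzhou',
};
```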

Implementation Reference

  • The handler function `handleQueryLogsSQL` that executes the SQL query using `queryLogsBySQL`.
    export async function handleQueryLogsSQL(input: QueryLogsSQLInput): Promise<string> {
      let from: number;
      let to: number;
    
      if (input.from && input.to) {
        from = input.from;
        to = input.to;
      } else {
        const range = parseTimeRange(input.time_range);
        from = range.from;
        to = range.to;
      }
    
      const result = await queryLogsBySQL({
        project: input.project,
        query: input.query,
        from,
        to,
        region: input.region,
      });
    
      const fromStr = formatTimestamp(from);
      const toStr = formatTimestamp(to);
    
      const header = [
        `## SLS SQL Query Results`,
        `**Project**: ${input.project}`,
        `**Time**: ${fromStr} → ${toStr}`,
        `**Query**: \`${input.query}\``,
        `**Rows**: ${result.logs.length}${result.processedRows ? ` (processed ${result.processedRows} rows)` : ''}`,
      ].join('\n');
    
      if (result.logs.length === 0) {
        return `${header}\n\nNo results returned.`;
      }
    
      const rows = result.logs.map((row, i) => `[${i + 1}] ${formatRow(row)}`).join('\n');
    
      return `${header}\n\n${rows}`;
    }
  • Input validation schema `queryLogsSQLSchema` for `query_logs_sql`.
    export const queryLogsSQLSchema = z.object({
      project: z.string().describe('SLS project name'),
      query: z
        .string()
        .describe(
          'SQL query with mandatory time range. Must include FROM clause with logstore and time filter using __date__ or __time__. Example: "SELECT status, count(*) as cnt FROM <logstore> WHERE __date__ > \'2024-01-01 00:00:00\' GROUP BY status ORDER BY cnt DESC"'
        ),
      time_range: z
        .string()
        .default('1h')
        .describe('Relative time range used to fill __time__ filter. Formats: 15m, 1h, 6h, 1d'),
      from: z.number().optional().describe('Start time as Unix timestamp (seconds). Overrides time_range.'),
      to: z.number().optional().describe('End time as Unix timestamp (seconds).'),
      region: z
        .string()
        .optional()
        .describe('Alibaba Cloud region ID, e.g. cn-hangzhou. Defaults to SLS_REGION env variable.'),
    });
  • src/index.ts:38-42 (registration)
    Registration of the `query_logs_sql` tool definition in the server.
    {
      name: 'query_logs_sql',
      description:
        'Execute a SQL query against an SLS project for log analysis and aggregation. Best for counting, grouping, statistical analysis. Example: "SELECT status, count(*) as cnt FROM <logstore> WHERE __time__ > 1700000000 GROUP BY status".',
      inputSchema: zodToJsonSchema(queryLogsSQLSchema) as Tool['inputSchema'],
    },
  • Request handler branch for `query_logs_sql` in `src/index.ts`.
    case 'query_logs_sql': {
      const input = queryLogsSQLSchema.parse(args);
      text = await handleQueryLogsSQL(input);
      break;
    }
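The handler above calls two helpers, `parseTimeRange` and `formatTimestamp`, that are referenced but not shown. A minimal sketch of what they might look like, assuming the formats documented in the schema (15m, 1h, 6h, 1d); the bodies are assumptions, not the repository's actual code:

```typescript
// Hypothetical helper implementations; names match the calls in the handler,
// but the bodies are illustrative sketches.

type TimeRange = { from: number; to: number };

// Parse a relative range like "15m", "1h", "6h", "1d" into Unix-second bounds
// ending at `nowSec` (injectable for testing).
function parseTimeRange(
  range: string = '1h',
  nowSec: number = Math.floor(Date.now() / 1000)
): TimeRange {
  const match = /^(\d+)([mhd])$/.exec(range.trim());
  if (!match) {
    throw new Error(`Unsupported time range: ${range} (expected e.g. 15m, 1h, 6h, 1d)`);
  }
  const value = Number(match[1]);
  const unitSeconds = { m: 60, h: 3600, d: 86400 }[match[2] as 'm' | 'h' | 'd'];
  return { from: nowSec - value * unitSeconds, to: nowSec };
}

// Render a Unix timestamp (seconds) as a readable UTC string for the header.
function formatTimestamp(ts: number): string {
  return new Date(ts * 1000)
    .toISOString()
    .replace('T', ' ')
    .replace(/\.\d+Z$/, ' UTC');
}
```

Note that the handler only falls back to `parseTimeRange` when either `from` or `to` is missing, so a caller supplying just one of the two absolute bounds gets the relative window instead.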

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool's analytical purpose and provides an example query, but doesn't cover important behavioral aspects like authentication requirements, rate limits, error handling, or what happens when queries fail. The description adds some value but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly sized with two sentences: the first states purpose and optimal use case, the second provides a concrete example. Every element earns its place, and the information is front-loaded with the core functionality stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex SQL execution tool with 6 parameters and no output schema, the description provides adequate purpose and usage context but lacks important behavioral details. Without annotations covering safety, authentication, or limits, and without an output schema, the description should do more to prepare the agent for proper tool invocation and result interpretation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema descriptions. It mentions SQL queries generally but doesn't provide additional parameter semantics, earning the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Execute a SQL query'), target resource ('against an SLS project'), and purpose ('for log analysis and aggregation'). It distinguishes from sibling tools like 'get_context_logs' or 'get_log_histogram' by specifying SQL-based analysis capabilities. The example further clarifies the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('Best for counting, grouping, statistical analysis'), but doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools. It implies usage for complex analytical queries versus simpler retrieval operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/SuxyEE/aliyun-sls-mcp'
