
Alibaba Cloud Observability MCP Server

Official
by aliyun

sls_execute_query

Execute SLS log queries to retrieve and analyze log data within specified time ranges in Alibaba Cloud Log Service projects and log stores.

Instructions

Execute an SLS log query.

        ## Overview

        This tool executes a query against the specified SLS project and log store and returns the results. The query runs within the specified time range.

        ## Use Cases

        - Querying log data by specific conditions
        - Analyzing log information within a specific time range
        - Searching for specific events or errors in logs
        - Computing aggregate statistics over log data



        ## Query Syntax

        Queries must use valid SLS query syntax, not natural language. If you are unfamiliar with the log store's structure, call the sls_describe_logstore tool first to retrieve its index information.

        ## Time Range

        Every query must specify a time range:
        - from_timestamp: start timestamp (seconds)
        - to_timestamp: end timestamp (seconds)

        ## Query Examples

        - "Look up the logs for XXX"
        - "Find error logs from the last hour"
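Natural-language requests like the examples above still have to be expressed as an SLS query string plus explicit timestamps before calling this tool. A minimal sketch of building the call payload for "find error logs from the last hour" follows; the project, log store, and field names are placeholders, and the query string assumes SLS's "search | analytics" syntax, so verify field names with sls_describe_logstore first.

```python
import time

# Hypothetical payload for sls_execute_query; all names are placeholders.
# The query assumes the logstore has a "level" field with statistics enabled.
now = int(time.time())
payload = {
    "project": "my-project",        # assumed SLS project name
    "log_store": "my-logstore",     # assumed log store name
    "query": "level: ERROR | SELECT count(*) AS error_count",
    "from_timestamp": now - 3600,   # one hour ago, in seconds
    "to_timestamp": now,            # now, in seconds
    "limit": 10,                    # default per the tool docs
    "region_id": "cn-hangzhou",
}
print(payload["to_timestamp"] - payload["from_timestamp"])  # 3600
```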

        ## Error Handling

        - "Column xxx can not be resolved": if the query was generated by the sls_translate_natural_language_to_query tool, the referenced column may not have statistics (analytics) enabled. Prompt the user to supply the missing information, or call sls_describe_logstore to fetch the index information and then ask the user to pick a valid field or enable statistics on the column. Once statistics are confirmed enabled, call sls_translate_natural_language_to_query again to regenerate the query.
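The recovery flow above can be sketched as a small dispatcher. The function name and return strings here are hypothetical, purely to illustrate how an agent might decide the next action from the error message:

```python
def next_step_for_error(error_message: str) -> str:
    """Hypothetical dispatcher for the error-recovery flow described above.

    "Column xxx can not be resolved" usually means the column lacks
    statistics (analytics) in the logstore index, so the suggested next
    action is to inspect the index before regenerating the query.
    """
    if "can not be resolved" in error_message:
        # Fetch index info so the user can pick a valid field or enable
        # statistics on the column, then regenerate the query.
        return ("call sls_describe_logstore, then retry "
                "sls_translate_natural_language_to_query")
    return "surface the error to the user"

print(next_step_for_error("Column status can not be resolved"))
```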

        Args:
            ctx: MCP context, used to access the SLS client
            project: SLS project name
            log_store: SLS log store name
            query: SLS query statement
            from_timestamp: query start timestamp (seconds)
            to_timestamp: query end timestamp (seconds)
            limit: maximum number of results to return, range 1-100, default 10
            region_id: Alibaba Cloud region ID

        Returns:
            A list of query results, one log record per element.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| from_timestamp | Yes | from timestamp, unit is second | |
| limit | No | limit, max is 100 | 10 |
| log_store | Yes | sls log store name | |
| project | Yes | sls project name | |
| query | Yes | query | |
| region_id | Yes | aliyun region id, format like 'xx-xxx', e.g. 'cn-hangzhou' | |
| to_timestamp | Yes | to timestamp, unit is second | |
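The schema constraints above (required fields, the 1-100 limit range, the 'xx-xxx' region format, and timestamp ordering) can be checked client-side before calling the tool. This validator is a sketch under those assumptions; the function name and regex are illustrative, not part of the tool's API:

```python
import re

def validate_args(args: dict) -> list:
    """Hypothetical pre-flight check against the schema constraints above."""
    problems = []
    for key in ("project", "log_store", "query", "from_timestamp",
                "to_timestamp", "region_id"):
        if key not in args:
            problems.append("missing required parameter: " + key)
    limit = args.get("limit", 10)  # default 10 per the docs
    if not 1 <= limit <= 100:
        problems.append("limit must be between 1 and 100")
    # Region IDs look like 'cn-hangzhou'; some have a numeric suffix.
    if "region_id" in args and not re.fullmatch(
            r"[a-z]+-[a-z]+(-\d+)?", args["region_id"]):
        problems.append("region_id should look like 'cn-hangzhou'")
    if args.get("from_timestamp", 0) > args.get("to_timestamp", 0):
        problems.append("from_timestamp must not be after to_timestamp")
    return problems

base = {"project": "p", "log_store": "ls", "query": "*",
        "from_timestamp": 0, "to_timestamp": 60, "region_id": "cn-hangzhou"}
print(validate_args(base))  # []
```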
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses that queries must use SLS syntax (not natural language), specifies required time range parameters, mentions error handling scenarios, and indicates results are returned as a list of log records. However, it doesn't cover important behavioral aspects like authentication requirements, rate limits, pagination behavior, or what happens with malformed queries beyond the specific error example.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is excessively long with redundant sections. The 'Args' and 'Returns' sections duplicate information that should be in the schema. The query examples are in natural language despite explicitly stating queries must use SLS syntax, creating confusion. The error handling section is overly specific to one sibling tool interaction. Much of this content could be streamlined or moved to structured documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 7-parameter query execution tool with no annotations and no output schema, the description provides adequate functional coverage but lacks important operational context. It explains what the tool does and when to use it, but doesn't sufficiently cover error patterns beyond one example, performance characteristics, or result format details. The description compensates somewhat for the lack of structured metadata but leaves gaps in behavioral transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds minimal value beyond the schema: it mentions time range requirements and provides an example limit value, but doesn't explain parameter interactions, constraints beyond what's in the schema, or the significance of region_id selection. The baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'executes SLS log queries' against specific resources (an SLS project and log store) and mentions returning query results. It distinguishes itself from siblings like sls_describe_logstore and sls_translate_natural_language_to_query by focusing on query execution rather than metadata or translation. However, it doesn't explicitly contrast with sls_diagnose_query, which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The '使用场景' (Use Cases) section provides clear context for when to use this tool (querying logs with specific conditions, time ranges, events, or aggregations). It explicitly references sibling tools sls_describe_logstore and sls_translate_natural_language_to_query for prerequisite steps. However, it doesn't explicitly state when NOT to use this tool or provide clear alternatives among siblings like sls_diagnose_query.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
