
Hologres MCP Server

Official
by aliyun

Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Latest release: v1.0.0

  • Disambiguation: 4/5

    Most tools have distinct purposes, such as executing different SQL types or listing schemas/tables, but there is some overlap between get_hg_execution_plan and get_hg_query_plan, which could cause confusion as both relate to query plans. The descriptions clarify the difference (runtime statistics vs. query plan), but the similarity in naming and function might lead to misselection in some contexts.

    Naming Consistency: 5/5

    All tool names follow a consistent verb_noun pattern with the prefix 'hg_' for Hologres, such as execute_hg_ddl_sql and list_hg_schemas. The naming is uniform across all tools, using snake_case throughout, which makes the set predictable and easy to understand.

    Tool Count: 5/5

    With 12 tools, the server is well-scoped for a database management system, covering key operations like SQL execution, procedure calls, table management, and query analysis. Each tool serves a specific function without redundancy, making the count appropriate for the domain.

    Completeness: 4/5

    The toolset provides comprehensive coverage for Hologres database operations, including CRUD-like SQL execution, schema/table listing, and performance analysis. A minor gap is the lack of tools for user or permission management, but core workflows for data querying and administration are well-covered, allowing agents to handle most tasks effectively.

  • Average tool score: 3.3/5 across all 12 tools.

    See the Tool Scores section below for per-tool breakdowns.

    • 1 of 1 issues responded to in the last 6 months
    • No commit activity data available
    • No stable releases found
    • No critical vulnerability alerts
    • No high-severity vulnerability alerts
    • No code scanning findings
    • CI is passing
  • This repository is licensed under Apache 2.0.

  • This repository includes a README.md file.

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • If you are the author, simply claim the server.

    If the server belongs to an organization, first add glama.json to the root of your repository:

    {
      "$schema": "https://glama.ai/mcp/schemas/server.json",
      "maintainers": [
        "your-github-username"
      ]
    }

    Then claim the server. Browse examples.

  • Add related servers to improve discoverability.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
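
To make the arithmetic concrete, here is a minimal sketch of the published formula — the weights and tier cutoffs come from the description above, while the function and variable names are illustrative:

    # Minimal sketch of the published scoring formula. Weights and tier
    # cutoffs are taken from the text above; all names are illustrative.
    TDQS_WEIGHTS = {
        "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
        "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
    }

    def tool_tdqs(scores: dict) -> float:
        """Weighted 1-5 score for one tool across the six dimensions."""
        return sum(w * scores[dim] for dim, w in TDQS_WEIGHTS.items())

    def overall_score(tool_scores: list, coherence_dims: list) -> float:
        """70% tool definition quality + 30% server coherence."""
        tdqs = [tool_tdqs(s) for s in tool_scores]
        # 60% mean + 40% minimum, so one poorly described tool drags it down.
        definition_quality = 0.6 * sum(tdqs) / len(tdqs) + 0.4 * min(tdqs)
        coherence = sum(coherence_dims) / len(coherence_dims)  # 4 equal dims
        return 0.7 * definition_quality + 0.3 * coherence

    def tier(score: float) -> str:
        for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
            if score >= cutoff:
                return grade
        return "F"

On this server's coherence scores (4, 5, 5, 4), the coherence component alone contributes 0.3 × 4.5 = 1.35 points of the overall score.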

Tool Scores

  • Stored procedure call tool

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It states the tool calls a stored procedure but doesn't disclose behavioral traits like whether it's read-only or destructive, what permissions are required, how errors are handled, or what the typical response format is. For a database operation tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, clear sentence that efficiently conveys the core purpose without any wasted words. It's appropriately sized and front-loaded, making it easy to understand at a glance.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of calling stored procedures in a database, the lack of annotations, and no output schema, the description is incomplete. It doesn't address what the tool returns, how to handle results, error conditions, or security implications. For a tool that could perform various database operations, more context is needed to use it effectively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already fully documents both parameters (procedure_name and arguments). The description adds no additional meaning beyond what's in the schema, such as examples of procedure names, argument formatting, or special considerations. The baseline of 3 is appropriate when the schema does all the work. A hypothetical call payload is sketched after this card.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Call') and target resource ('a stored procedure in Hologres database'), making the purpose immediately understandable. However, it doesn't differentiate this tool from its siblings like 'execute_hg_ddl_sql' or 'execute_hg_dml_sql', which might also interact with stored procedures or similar database operations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. Given the sibling tools include various SQL execution and database operation tools, there's no indication of whether this is for specific stored procedure calls, how it differs from general SQL execution, or any prerequisites for its use.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
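
    For reference, a hypothetical call payload for the stored-procedure tool reviewed above. Only the two parameter names (procedure_name and arguments) come from the report; the tool name and values are placeholders:

    # Hypothetical payload: the parameter names are from the report's schema
    # summary; the tool name and argument values are invented for illustration.
    call = {
        "name": "call_hg_procedure",  # placeholder name, not confirmed
        "arguments": {
            "procedure_name": "refresh_daily_stats",  # invented example
            "arguments": ["2024-01-01"],              # invented example
        },
    }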

  • execute_hg_dml_sql

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool executes DML SQL, implying data mutation, but lacks critical details such as required permissions, whether changes are reversible, potential side effects, or error handling. This is a significant gap for a mutation tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads key information ('Execute (INSERT, UPDATE, DELETE) SQL'). It avoids redundancy, though it could be slightly more structured by separating purpose from context. Overall, it earns its place without waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of a DML execution tool with no annotations and no output schema, the description is incomplete. It fails to address behavioral aspects like safety, permissions, or response format, leaving the agent with insufficient context to use the tool effectively beyond basic parameter input.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, with the 'query' parameter documented as 'The DML SQL query to execute in Hologres database'. The description adds no additional meaning beyond this, such as syntax examples or constraints, so it meets the baseline of 3 where the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Execute') and resource ('SQL to insert, update, and delete data in Hologres database'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like execute_hg_ddl_sql or execute_hg_select_sql, which handle other SQL types, so it falls short of a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage for DML operations (INSERT, UPDATE, DELETE) but provides no explicit guidance on when to use this tool versus alternatives like execute_hg_ddl_sql for DDL or execute_hg_select_sql for queries. There are no exclusions or prerequisites mentioned, leaving the agent to infer context from tool names alone. A routing sketch illustrating that inference follows this card.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
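
    To make the DDL/DML/SELECT split concrete, a sketch of the inference an agent currently has to make from tool names alone. The tool names are the ones cited in the report; the keyword rule is ours, not the server's own dispatch logic:

    # Illustrative routing of a SQL statement to the sibling execute tools
    # named in the report. The keyword rule is a simplification.
    def pick_tool(sql: str) -> str:
        keyword = sql.lstrip().split()[0].upper()
        if keyword == "SELECT":
            return "execute_hg_select_sql"
        if keyword in ("INSERT", "UPDATE", "DELETE"):
            return "execute_hg_dml_sql"
        if keyword in ("CREATE", "ALTER", "DROP"):
            return "execute_hg_ddl_sql"
        raise ValueError(f"no obvious tool for a statement starting with {keyword}")

    assert pick_tool("INSERT INTO t VALUES (1)") == "execute_hg_dml_sql"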

  • execute_hg_select_sql

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool executes SELECT SQL queries, implying read-only operations, but does not cover critical aspects such as authentication requirements, rate limits, error handling, or output format. For a database query tool with zero annotation coverage, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence: 'Execute SELECT SQL to query data from Hologres database.' It is front-loaded with the core purpose, has no redundant information, and every word contributes to understanding the tool's function. This makes it highly concise and well-structured for quick comprehension.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of database querying, lack of annotations, and no output schema, the description is incomplete. It does not explain what the tool returns (e.g., result sets, error messages), performance implications, or security considerations. For a tool that interacts with a database, more context is needed to ensure safe and effective use by an AI agent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, with the 'query' parameter documented as 'The (SELECT) SQL query to execute in Hologres database.' The description adds no additional parameter semantics beyond this, such as query syntax examples or constraints. Given the high schema coverage, the baseline score of 3 is appropriate, as the schema adequately handles parameter documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: 'Execute SELECT SQL to query data from Hologres database.' It specifies the verb ('Execute SELECT SQL'), resource ('Hologres database'), and action ('query data'), which is precise. However, it does not explicitly distinguish this tool from its sibling 'execute_hg_select_sql_with_serverless', which likely serves a similar purpose but with different execution context, so it misses full sibling differentiation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, constraints, or comparisons to sibling tools like 'execute_hg_select_sql_with_serverless' or 'execute_hg_dml_sql', leaving the agent without context for selection. This lack of usage instructions reduces its effectiveness in guiding tool invocation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Table statistics (ANALYZE TABLE) tool

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the command execution and purpose but lacks critical details: it doesn't specify whether this is a read-only or destructive operation (ANALYZE TABLE can be resource-intensive), required permissions, execution time, or impact on database performance. This leaves significant gaps for safe and effective use.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads the key action and purpose without unnecessary details. Every word contributes to understanding the tool's function, making it appropriately sized and well-structured for quick comprehension.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of a database statistics tool with no annotations and no output schema, the description is incomplete. It explains what the tool does but omits behavioral aspects (e.g., execution characteristics, side effects) and output details. For a tool that likely affects query performance, more context on usage and results is needed for effective agent operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with both parameters (schema and table) well-documented in the input schema. The description adds no additional parameter semantics beyond implying these are used for the ANALYZE TABLE command. This meets the baseline score of 3 when schema coverage is high, as the schema adequately explains the parameters. A hypothetical invocation is sketched after this card.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Execute the ANALYZE TABLE command'), the target (table statistics in the Hologres database), and a specific purpose ('enabling QO to generate better query plans'). It distinguishes itself from siblings like list_hg_tables_in_a_schema or show_hg_table_ddl by focusing on statistics collection rather than listing or showing DDL. However, it doesn't explicitly differentiate itself from execute_hg_ddl_sql, which could potentially run similar commands.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., after data changes), exclusions (e.g., not for real-time tables), or compare to siblings like execute_hg_ddl_sql for similar SQL execution. Usage is implied through the purpose but lacks explicit context for selection.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
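
    As an illustration, a hypothetical invocation of the statistics tool reviewed above. The two parameter names (schema and table) come from the report; the values, and the assumption that the server runs a PostgreSQL-style ANALYZE under the hood, are ours:

    # Hypothetical call: parameter names (schema, table) are from the report;
    # the values are invented. The server presumably executes something like
    # ANALYZE <schema>.<table> -- an assumption based on the description.
    call = {"schema": "public", "table": "orders"}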

  • get_hg_execution_plan

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'actual execution plan with runtime statistics', implying it may execute the query to gather runtime data, but does not specify if this is read-only, has side effects, requires permissions, or details on rate limits or output format. This leaves significant gaps in understanding the tool's behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and appropriately sized, making it easy to parse and understand quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of SQL execution analysis and the lack of annotations and output schema, the description is incomplete. It does not explain what the output includes (e.g., plan details, statistics format), potential impacts (e.g., if query execution occurs), or how it differs from similar tools, making it inadequate for full contextual understanding.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, with the parameter 'query' clearly documented. The description adds no additional semantic details beyond what the schema provides, such as query format constraints or examples. Thus, it meets the baseline score of 3, as the schema adequately covers parameter information.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('Get') and resource ('actual execution plan with runtime statistics for a SQL query in Hologres database'), making the purpose specific and understandable. However, it does not explicitly differentiate from its sibling 'get_hg_query_plan', which might be a similar tool, leaving some ambiguity in distinguishing between them.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives, such as 'get_hg_query_plan' or other execution tools like 'execute_hg_select_sql'. There are no explicit instructions on prerequisites, context, or exclusions, leaving usage decisions unclear for an AI agent.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_hg_query_plan

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'Get[s] query plan', implying a read-only operation, but doesn't clarify if it requires specific permissions, whether it's safe for production use, what the output format is, or any rate limits. This leaves significant gaps for an agent to understand how to invoke it effectively.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, clear sentence with no wasted words, making it highly concise and front-loaded. It efficiently communicates the core purpose without unnecessary elaboration.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of database query analysis tools and the lack of annotations and output schema, the description is incomplete. It doesn't explain what a 'query plan' entails, how the result is structured, or any behavioral traits like error handling, making it inadequate for an agent to use confidently without additional context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, with the parameter 'query' documented as 'The SQL query to analyze in Hologres database'. The description adds no additional semantic details beyond this, such as query syntax requirements or examples, so it meets the baseline for high schema coverage without extra value.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb 'Get' and the resource 'query plan for a SQL query in Hologres database', making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_hg_execution_plan', which might cause confusion about the distinction between a query plan and an execution plan. A sketch of the likely distinction follows this card.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With siblings like 'execute_hg_select_sql' and 'get_hg_execution_plan', there's no indication of whether this is for analysis, debugging, or optimization, or any prerequisites for usage.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
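
    Since Hologres is PostgreSQL-compatible, the likely distinction between the two plan tools — an assumption, as the report only hints at it — mirrors EXPLAIN versus EXPLAIN ANALYZE:

    # Assumed mapping, not confirmed by the report: the query-plan tool
    # returns the estimated plan (like EXPLAIN), while the execution-plan
    # tool runs the query and reports runtime statistics (like EXPLAIN
    # ANALYZE). Payload values are invented.
    estimated = {"name": "get_hg_query_plan",
                 "arguments": {"query": "SELECT * FROM orders WHERE id = 1"}}
    actual = {"name": "get_hg_execution_plan",
              "arguments": {"query": "SELECT * FROM orders WHERE id = 1"}}
    # If the execution-plan tool behaves like EXPLAIN ANALYZE, it actually
    # executes the statement -- worth knowing before passing it DML.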

  • show_hg_table_ddl

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool shows DDL scripts but doesn't describe what the output looks like (e.g., SQL text format), whether it's read-only, if it requires specific permissions, or any side effects. This leaves significant gaps in understanding the tool's behavior beyond its basic purpose.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without any unnecessary words or fluff. It is front-loaded and appropriately sized for a simple tool, earning a high score for conciseness and structure.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is incomplete. It lacks details on output format, behavioral traits, and usage context, which are essential for an agent to effectively invoke this tool. Without annotations or output schema, the description should provide more comprehensive guidance.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, clearly documenting both required parameters ('schema' and 'table') with their meanings. The description adds no additional parameter details beyond what the schema provides, so it meets the baseline score of 3 for adequate but not enhanced parameter semantics.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Show DDL script') and target resource ('table, view, or foreign table in Hologres database'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from siblings like 'execute_hg_ddl_sql' or 'get_hg_query_plan', which might also involve DDL or metadata operations, so it falls short of a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or compare it to sibling tools such as 'list_hg_tables_in_a_schema' for discovery or 'execute_hg_ddl_sql' for execution, leaving the agent without explicit usage direction.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • MaxCompute foreign table creation tool

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden but offers minimal behavioral disclosure. It mentions the outcome ('accelerate queries') but doesn't cover critical aspects like required permissions, whether this creates a persistent or temporary resource, error conditions, or performance characteristics. For a creation tool with zero annotation coverage, this leaves significant gaps.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that immediately conveys the core purpose without unnecessary words. It's front-loaded with the main action and benefit, making it easy for an agent to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with 4 parameters, 100% schema coverage, but no annotations or output schema, the description provides basic purpose but lacks completeness. It doesn't address mutation implications, return values, or error handling. The agent understands what the tool does but not how it behaves or what to expect from its execution.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, providing complete parameter documentation. The description adds no additional parameter semantics beyond what's in the schema. It doesn't explain relationships between parameters or provide examples. The baseline score of 3 reflects adequate but not enhanced parameter understanding.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Create a MaxCompute foreign table'), the resource ('in Hologres database'), and the purpose ('to accelerate queries on MaxCompute data'). It distinguishes itself from sibling tools like 'execute_hg_ddl_sql' by focusing on a specialized operation rather than general SQL execution.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, performance implications, or compare it to similar tools like 'execute_hg_ddl_sql' which might also create tables. The agent must infer usage from the purpose alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_hg_tables_in_a_schema

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It describes the action and output format (table types), but does not disclose behavioral traits such as permissions required, rate limits, pagination, error handling, or whether it's read-only. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads the core purpose ('List all tables in a specific schema') and adds necessary detail ('including their types'). There is no wasted verbiage, and it directly communicates the tool's function.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and output format, but lacks details on behavioral aspects like permissions or error handling. Without annotations or output schema, more context on what the tool returns would be helpful for completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with the single parameter 'schema' well-described in the schema. The description adds no additional parameter semantics beyond implying the schema is for listing tables, which aligns with the schema's description. Baseline 3 is appropriate as the schema handles the parameter documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('List all tables') and resource ('in a specific schema in the current Hologres database'), specifying the scope and what information is included ('including their types'). It distinguishes from siblings like 'list_hg_schemas' (which lists schemas, not tables) and 'show_hg_table_ddl' (which shows DDL for a specific table).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage by specifying 'in a specific schema' and 'current Hologres database', but does not explicitly state when to use this tool versus alternatives like executing SQL queries directly. It lacks explicit exclusions or named alternatives, though the context suggests it's for listing tables rather than other operations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • execute_hg_ddl_sql

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. While it indicates this is for executing DDL statements (implying schema modifications), it lacks critical details such as required permissions, whether operations are reversible, potential side effects on dependent objects, or error handling. For a mutation tool with zero annotation coverage, this is a significant gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads the key information ('Execute (CREATE, ALTER, DROP) SQL statements') and specifies the resource scope. There is no wasted verbiage, and every word contributes to understanding the tool's purpose and usage context.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity of a DDL execution tool with no annotations and no output schema, the description is insufficient. It lacks details on behavioral traits (e.g., permissions, side effects), return values, or error handling. While it covers the basic purpose, it does not provide enough context for safe and effective use in a production environment.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with the single parameter 'query' documented as 'The DDL SQL query to execute in Hologres database'. The description adds minimal value beyond the schema by specifying the types of SQL statements (CREATE, ALTER, DROP) but does not provide additional syntax, format, or constraints. Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Execute SQL statements') and the specific resource types affected ('tables, views, procedures, GUCs etc. in Hologres database'), with explicit verb+resource pairing. It distinguishes from siblings like execute_hg_dml_sql or execute_hg_select_sql by specifying DDL operations (CREATE, ALTER, DROP) rather than DML or SELECT queries.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear context for when to use this tool: for executing DDL SQL statements (CREATE, ALTER, DROP) in Hologres. It implies alternatives by specifying DDL operations, distinguishing it from siblings like execute_hg_dml_sql or execute_hg_select_sql. However, it does not explicitly state when NOT to use it or name specific alternatives, keeping it at a 4.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_hg_schemas

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses the exclusion behavior ('excluding system schemas'), which is valuable context beyond basic listing. However, it doesn't mention pagination, rate limits, authentication requirements, or return format details that would be helpful for a read operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads the core purpose ('List all schemas') and immediately adds qualifying information ('in the current Hologres database, excluding system schemas'). Every word serves a clear purpose with zero redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a zero-parameter read tool with no annotations and no output schema, the description provides adequate basic context about what it does and what it excludes. However, it lacks details about return format, potential limitations, or how results are structured, which would be valuable for an agent invoking this tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool has zero parameters, and schema description coverage is 100% (empty schema). The description appropriately doesn't discuss parameters since none exist, maintaining focus on the tool's purpose. This meets the baseline expectation for parameterless tools.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('List all schemas') and resource ('in the current Hologres database'), with explicit scope clarification ('excluding system schemas'). It distinguishes from sibling tools like 'list_hg_tables_in_a_schema' by focusing on schemas rather than tables.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage context ('current Hologres database') and provides exclusion guidance ('excluding system schemas'), but doesn't explicitly state when to use this tool versus alternatives like executing SQL queries directly. No misleading guidance is present.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • execute_hg_select_sql_with_serverless

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains the tool's purpose (executing SELECT queries with serverless resources) and its specific use case for handling memory limitation errors, which is valuable context. However, it doesn't mention other behavioral aspects like performance characteristics, authentication needs, or error handling beyond the memory issue.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is highly concise and well-structured in two sentences: the first states the core purpose, and the second provides specific usage guidance. Every word earns its place with no redundancy or fluff, making it easy to parse and understand quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (executing SQL queries with serverless resources), no annotations, and no output schema, the description does a good job covering the essential context: purpose, specific use case, and differentiation from siblings. However, it lacks details on return values, error handling beyond memory limits, or performance implications, leaving some gaps for a tool that interacts with a database.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents the single 'query' parameter thoroughly. The description adds minimal value beyond what's in the schema by mentioning it's for 'SELECT SQL' execution, but doesn't provide additional syntax, format, or constraint details. This meets the baseline for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('execute SELECT SQL to query data') and resource ('Hologres database') using 'Serverless Computing resources'. It explicitly distinguishes from sibling tool 'execute_hg_select_sql' by mentioning it as an alternative for memory limitation errors, making the purpose specific and differentiated.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit guidance on when to use this tool versus alternatives: 'When the error like "Total memory used by all existing queries exceeded memory limitation" occurs during execute_hg_select_sql execution, you can re-execute the SQL with this tool.' This clearly defines the specific scenario and names the alternative tool, offering complete usage context. This fallback pattern is sketched after this card.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
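
    The quoted guidance translates directly into a retry pattern. A minimal sketch, assuming a generic call_tool client function (a stand-in for whatever MCP client your agent framework provides) and matching on the error text the description cites:

    # Fallback pattern implied by the tool description: retry with the
    # serverless variant when the memory-limit error occurs. `call_tool`
    # is a stand-in for an MCP client invocation, not a real API.
    MEMORY_ERROR = "exceeded memory limitation"

    def run_select(call_tool, query: str):
        try:
            return call_tool("execute_hg_select_sql", {"query": query})
        except Exception as err:
            if MEMORY_ERROR in str(err):
                # Re-execute with Serverless Computing resources, as the
                # description recommends.
                return call_tool("execute_hg_select_sql_with_serverless",
                                 {"query": query})
            raise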

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Card badge image: alibabacloud-hologres-mcp-server MCP server — copy the badge snippet into your README.md]

Score Badge

[Score badge image: alibabacloud-hologres-mcp-server MCP server — copy the badge snippet into your README.md]


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/aliyun/alibabacloud-hologres-mcp-server'
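
For example, the same request from Python using only the standard library — the response structure is not documented on this page, so the sketch just prints whatever JSON comes back:

    # Same request as the curl above, via urllib. The response fields are
    # not documented here, so we simply pretty-print the JSON.
    import json
    from urllib.request import urlopen

    url = ("https://glama.ai/api/mcp/v1/servers/"
           "aliyun/alibabacloud-hologres-mcp-server")
    with urlopen(url) as resp:
        print(json.dumps(json.load(resp), indent=2))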

If you have feedback or need assistance with the MCP directory API, please join our Discord server.