Glama
BACH-AI-Tools

Postgres MCP Pro

explain_query

Analyze SQL query execution plans to understand database performance, optimize queries, and simulate index changes for PostgreSQL databases.

Instructions

Explains the execution plan for a SQL query, showing how the database will execute it and provides detailed cost estimates.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| sql | Yes | SQL query to explain. | |
| analyze | No | When true, actually runs the query to show real execution statistics instead of estimates. Takes longer but provides more accurate information. | |
| hypothetical_indexes | No | A list of hypothetical indexes to simulate. Each index must be a dictionary with a 'table' key (the table to add the index to, e.g. 'users'), a 'columns' key (a list of column names to include, e.g. ['email'] or ['last_name', 'first_name']), and an optional 'using' key for the index method (default 'btree'; other options include 'hash', 'gist', etc.). Pass an empty list if there are no hypothetical indexes. | |

Example value for hypothetical_indexes:

```json
[
  {"table": "users", "columns": ["email"], "using": "btree"},
  {"table": "orders", "columns": ["user_id", "created_at"]}
]
```
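Before invoking the tool, a caller can assemble and sanity-check the arguments against the schema above. A minimal sketch — the helper function is hypothetical; only the key names ('sql', 'analyze', 'hypothetical_indexes', 'table', 'columns', 'using') and the 'btree' default come from the documented schema:

```python
def build_explain_args(sql, analyze=False, hypothetical_indexes=None):
    """Assemble and validate arguments for explain_query per its input schema.

    Hypothetical helper: the schema defines the keys; this wrapper merely
    checks them before the MCP call is made.
    """
    indexes = hypothetical_indexes or []  # empty list is valid per the schema
    for idx in indexes:
        if "table" not in idx or "columns" not in idx:
            raise ValueError("each hypothetical index needs 'table' and 'columns'")
        if not isinstance(idx["columns"], list):
            raise ValueError("'columns' must be a list of column names")
        idx.setdefault("using", "btree")  # documented default index method
    return {"sql": sql, "analyze": analyze, "hypothetical_indexes": indexes}

args = build_explain_args(
    "SELECT * FROM orders WHERE user_id = 42",
    hypothetical_indexes=[{"table": "orders", "columns": ["user_id", "created_at"]}],
)
```

The returned dictionary matches the three documented parameters, with the optional 'using' key filled in with its documented default when omitted.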
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that the tool 'shows how the database will execute' and provides 'detailed cost estimates', which gives some behavioral context. However, it doesn't disclose important traits like whether this is a read-only operation, potential performance impact (especially with analyze=true), rate limits, or authentication requirements. The description adds basic context but misses key behavioral details.
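Because analyze=true actually executes the query, running it against a data-modifying statement would apply the changes (in PostgreSQL, EXPLAIN ANALYZE on INSERT, UPDATE, or DELETE really performs the writes). A cautious caller might gate the flag with a read-only heuristic — this guard is not part of the tool, just an illustrative sketch:

```python
READ_ONLY_PREFIXES = ("select", "with", "values", "table")

def safe_to_analyze(sql: str) -> bool:
    """Heuristic guard (not provided by the tool): allow analyze=True only
    for statements that look read-only. Imperfect — e.g. a WITH clause can
    wrap data-modifying CTEs in PostgreSQL — but it catches the common cases.
    """
    first_word = sql.lstrip().split(None, 1)[0].lower()
    return first_word in READ_ONLY_PREFIXES
```

A caller would then fall back to the plan-only mode (analyze=false) whenever the check fails.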

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise at two sentences with zero wasted words. The first sentence states the core purpose, and the second sentence adds important behavioral context about what the explanation includes. Every sentence earns its place by providing distinct value, and the information is front-loaded with the most important purpose statement first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with three parameters, no annotations, and no output schema, the description provides adequate but incomplete context. It clearly states what the tool does but doesn't address important contextual aspects such as what format the explanation is returned in, whether it's safe to run on production databases, or how it differs from similar tools. The description is complete enough for basic understanding but leaves gaps for practical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema descriptions. It mentions 'execution plan' and 'cost estimates' which relate to the output rather than input parameters. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('explains', 'showing', 'provides') and resources ('execution plan for a SQL query', 'detailed cost estimates'). It distinguishes itself from siblings like execute_sql (which runs queries) and analyze_query_indexes (which focuses on indexes) by emphasizing explanation and planning rather than execution or optimization analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (understanding query performance before execution) but doesn't explicitly state when to use this tool versus alternatives. For example, it doesn't clarify when to choose explain_query over analyze_query_indexes for index analysis or when to use it alongside execute_sql. The guidance is present but not explicit about alternatives or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
