Postgres MCP Pro
Server Quality Checklist
- This repository includes a README.md file.
- This repository includes a LICENSE file.
- Latest release: v0.3.0
- No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value. Tip: use the "Try in Browser" feature on the server page to seed initial usage.
- Add a glama.json file to provide metadata about your server.
- This server provides 9 tools.
- No known security issues or vulnerabilities reported.
- Add related servers to improve discoverability.
Tool Scores
Each tool description is scored 1–5 on six dimensions:
- Behavior: does the description disclose side effects, auth requirements, rate limits, or destructive behavior? Agents need to know what a tool does to the world before calling it, so descriptions should go beyond structured annotations to explain consequences.
- Conciseness: is the description appropriately sized, front-loaded, and free of redundancy? Shorter descriptions cost fewer tokens and are easier for agents to parse; every sentence should earn its place.
- Completeness: given the tool's complexity, does the description cover enough for an agent to succeed on the first attempt? Complex tools with many parameters or behaviors need more documentation; simple tools need less.
- Parameters: does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides? Input schemas describe structure but not intent.
- Purpose: does the description clearly state what the tool does and how it differs from similar tools? Agents choose between tools based on descriptions, so a clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: does the description explain when to use this tool, when not to, or what alternatives exist? Agents often have multiple tools that could apply, and explicit guidance like "use X instead of Y when Z" prevents misuse.
Tool 1: execute_sql
- Behavior: 1/5. With no annotations provided, the description carries full responsibility for disclosing safety traits. It completely omits whether the tool can modify data, required permissions, transaction behavior, or result format—critical omissions for arbitrary SQL execution.
- Conciseness: 2/5. While brief (3 words), it is under-specified rather than efficiently concise. The brevity masks critical missing information (safety warnings, scope) rather than eliminating waste.
- Completeness: 1/5. Completely inadequate for a high-risk arbitrary execution tool. Missing: output format (result sets vs. row count), destructive operation warnings, DDL vs. DML capabilities, and error handling expectations.
- Parameters: 3/5. Schema coverage is 100% for the single `sql` parameter ('SQL to run'). The description adds no validation rules, syntax examples, or clarification of the unusual default value 'all', meeting the baseline.
- Purpose: 3/5. States the core action (Execute) and resource (SQL query), but 'any' is dangerously unscoped and fails to distinguish from analytical siblings like `explain_query` or `analyze_query_indexes`.
- Usage Guidelines: 1/5. Provides no guidance on when to use this versus `explain_query` or other analysis tools, nor warnings about read-only vs. write queries.
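As an illustration of what the review above asks for, here is a hypothetical rewrite of this tool's definition. The wording and the MCP `annotations` block are a sketch, not the server's actual definition:

```json
{
  "name": "execute_sql",
  "description": "Execute a single SQL statement (SELECT, DML, or DDL) against the connected PostgreSQL database and return result rows, or the affected-row count for writes. Statements are not sandboxed: INSERT/UPDATE/DELETE/DROP will modify the database. Prefer explain_query to inspect a plan without side effects, and list_schemas/list_objects for schema discovery.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "sql": { "type": "string", "description": "SQL to run" }
    },
    "required": ["sql"]
  },
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": true,
    "idempotentHint": false
  }
}
```

A description like this would address the Behavior, Usage Guidelines, and Completeness critiques in one pass: it names the side effects, the alternatives, and the result format.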
Tool 2: get_object_details
- Behavior: 2/5. No annotations provided, yet the description discloses no behavioral traits beyond the obvious read-only implication of 'Show.' It fails to specify what 'detailed information' includes (structure, metadata, statistics), whether the operation is safe or idempotent, or any error conditions.
- Conciseness: 4/5. Single sentence with zero redundancy or filler. However, given the lack of annotations and output schema, the description is arguably underweight rather than efficiently concise.
- Completeness: 2/5. Inadequate for a tool with no annotations and no output schema. The description fails to hint at return value structure, complexity level (e.g., 'includes column definitions and constraints'), or how it complements other database introspection tools.
- Parameters: 3/5. Schema coverage is 100%, so the schema adequately documents all three parameters, including the default value and valid options for object_type. The description adds no parameter-specific context, qualifying for the baseline score of 3.
- Purpose: 3/5. States the basic action (Show) and resource (database object) but is generic. 'Detailed information' is vague, and the description fails to differentiate from sibling tool list_objects (which returns multiple items vs. this single-object focus).
- Usage Guidelines: 2/5. Provides no guidance on when to use this tool versus alternatives like list_objects or list_schemas. No mention of prerequisites (e.g., knowing schema_name from list_schemas first) or intended workflow.
Tool 3: analyze_workload_indexes
- Behavior: 2/5. With no annotations provided, the description fails to disclose whether this operation is read-only or modifies system state, whether recommendations are persisted, execution duration expectations, or required permissions. 'Analyze' implies read-only but lacks explicit confirmation.
- Conciseness: 4/5. A single sentence efficiently delivers the core purpose without redundancy and is front-loaded with the action verb. However, the extreme brevity contributes to gaps in behavioral transparency and usage guidelines.
- Completeness: 3/5. Covers basic intent but leaves significant gaps given the presence of semantically similar siblings (analyze_query_indexes, analyze_db_health) and the lack of annotations. Should clarify scope boundaries and safety profile for a database analysis tool.
- Parameters: 3/5. Input schema has 100% description coverage ('Max index size in MB', 'Method to use for analysis'), so the schema documents parameters adequately. The description adds no parameter-specific guidance, earning the baseline score for well-documented schemas.
- Purpose: 4/5. States a specific action (Analyze) and target resource (frequently executed queries) with a clear outcome (recommend optimal indexes). The 'frequently executed' qualifier helps distinguish from sibling analyze_query_indexes, though it doesn't explicitly clarify the workload-level scope versus single-query analysis.
- Usage Guidelines: 2/5. Provides no guidance on when to use this tool versus close sibling analyze_query_indexes, or when to prefer the 'dta' versus 'llm' method. No prerequisites, exclusions, or selection criteria mentioned.
Tool 4: explain_query
- Behavior: 2/5. With no annotations, the description bears full responsibility for behavioral disclosure. It fails to warn that the tool can actually execute queries when analyze=True (potentially destructive for INSERT/UPDATE/DELETE statements) or describe the performance implications of running queries. It also omits the hypothetical index simulation capability entirely.
- Conciseness: 4/5. A single, front-loaded sentence that efficiently states the tool's purpose without redundancy. However, given the complexity of the hypothetical_indexes parameter and the safety implications of analyze, the brevity may be excessive under-specification rather than optimal conciseness.
- Completeness: 2/5. Inadequate for a three-parameter tool with complex nested objects (hypothetical index definitions) and no output schema. The description omits the tool's index simulation capabilities, fails to describe return value structure, and lacks safety warnings necessary for a tool capable of executing SQL when analyze=True.
- Parameters: 3/5. Schema coverage is 100%, establishing a baseline of 3. The description mentions 'cost estimates', which contextually relates to the analyze parameter's purpose, but adds no syntax guidance, format details, or semantic clarifications beyond what the detailed schema already provides.
- Purpose: 4/5. Clearly states the core function (explains execution plans) and outputs (cost estimates). Uses specific verbs and identifies the resource (SQL query). However, it does not explicitly differentiate from sibling analysis tools like analyze_query_indexes or analyze_workload_indexes.
- Usage Guidelines: 2/5. Provides no guidance on when to use this tool versus alternatives. Does not mention whether to use this before execute_sql, when debugging slow queries, or how it relates to the index analysis siblings. No prerequisites or conditions are stated.
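To make the hypothetical-index capability concrete, a call might look like the following. The field names inside hypothetical_indexes are assumptions for illustration; only the sql, analyze, and hypothetical_indexes parameters are attested above:

```json
{
  "name": "explain_query",
  "arguments": {
    "sql": "SELECT * FROM orders WHERE customer_id = 42",
    "analyze": false,
    "hypothetical_indexes": [
      { "table": "orders", "columns": ["customer_id"] }
    ]
  }
}
```

With analyze set to false the statement is only planned, never run; setting it to true executes the query, which is exactly the safety behavior the review flags as undisclosed.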
Tool 5: list_objects
- Behavior: 2/5. With no annotations provided, the description carries the full burden of behavioral disclosure but reveals nothing about read-only safety, return value structure, pagination behavior, or error handling (e.g., invalid schema names). It also does not clarify what constitutes an 'object' in this context beyond the schema's default of 'table'.
- Conciseness: 4/5. Extremely efficient at four words with no redundancy. However, given the complete absence of annotations and output schema, the description may be overly terse, trading necessary behavioral context for brevity.
- Completeness: 3/5. Adequate for a simple two-parameter listing tool with complete schema coverage, but lacks contextual safeguards given no annotations. It also misses differentiation from similar sibling tools; the 'analyze_' and 'execute_' siblings suggest this is a database introspection tool where safety guidance would be valuable.
- Parameters: 3/5. Input schema has 100% description coverage, with 'object_type' already enumerating valid values ('table', 'view', 'sequence', 'extension'). The description adds no semantic clarification beyond the schema, meeting the baseline for high-coverage schemas.
- Purpose: 4/5. Uses a clear verb ('List') with a specific resource ('objects') and scope ('in a schema'), making the basic purpose understandable. However, it fails to differentiate from sibling tools like 'list_schemas' (which lists schemas rather than objects within them) or 'get_object_details' (which retrieves specific object metadata).
- Usage Guidelines: 2/5. No guidance provided on when to use this tool versus alternatives like 'get_object_details' for single-object lookups or 'list_schemas' for schema enumeration. No mention of prerequisite steps (e.g., verifying a schema exists) or when to prefer filtering by specific object types.
Tool 6: list_schemas
- Behavior: 2/5. No annotations are provided, so the description carries the full burden. It implies a read-only operation via 'List', but does not disclose safety guarantees, required permissions, pagination behavior, or what the return format contains.
- Conciseness: 5/5. The description is exactly six words with no redundancy. It is front-loaded with the action and object, making it extremely efficient for an agent to parse.
- Completeness: 3/5. Given the zero-parameter simplicity, the description is minimally adequate. However, with no output schema and no annotations, it could benefit from clarifying what constitutes a 'schema' (database namespace vs. object) and how results are structured.
- Parameters: 4/5. The input schema contains 0 parameters with 100% coverage. Per scoring rules, 0 parameters establishes a baseline of 4. The description does not need to compensate for missing parameter documentation.
- Purpose: 4/5. Uses a clear verb ('List') and resource ('schemas') and specifies scope ('in the database'). However, it does not distinguish from sibling tool 'list_objects', which could confuse the agent about when to use schemas vs. objects.
- Usage Guidelines: 2/5. Provides no guidance on when to use this tool versus alternatives like 'list_objects' or 'get_object_details'. It states only what the tool does, not when to invoke it or its prerequisites.
Tool 7: analyze_db_health
- Behavior: 4/5. No annotations provided, so the description carries the full burden. It discloses clearly what each check evaluates (e.g., 'checks for invalid, duplicate, and bloated indexes', 'checks vacuum health for transaction id wraparound'). However, it doesn't explicitly state that the tool is read-only/safe or describe the return format.
- Conciseness: 4/5. Well-structured with an initial summary, a detailed bullet list of check types, and parameter guidance. The length is appropriate for the complexity of 8 distinct check types. No redundant sentences, though the comma-separated note could be more concise.
- Completeness: 4/5. Comprehensive coverage of the functional domain (what health checks are performed). Without annotations or output schema, it still explains the operational scope, though it could benefit from a note about the output format or execution impact.
- Parameters: 4/5. Schema coverage is 100%, but the description adds significant semantic value by explaining what each enum value represents (e.g., what 'sequence' or 'buffer' checks mean in detail) and by noting that comma-separated lists are accepted—context not explicit in the schema's type definition.
- Purpose: 4/5. Clear verb 'Analyzes' and resource 'database health'. The bulleted list of specific infrastructure checks (vacuum, replication, sequence, etc.) implicitly distinguishes this from sibling 'analyze_query_indexes', which focuses on query optimization. However, it doesn't explicitly contrast with siblings.
- Usage Guidelines: 2/5. Provides usage guidance for the parameter ('optionally specify a single health check or a comma-separated list'), but fails to specify when to use this tool versus alternatives like 'analyze_query_indexes' or 'execute_sql'. No prerequisites or exclusion criteria mentioned.
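For example, the comma-separated form the description mentions might be invoked like this. The parameter name health_type is an assumption for illustration; the check names come from the checks listed in the description (index, vacuum, replication, and so on):

```json
{
  "name": "analyze_db_health",
  "arguments": {
    "health_type": "index,vacuum,replication"
  }
}
```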
Tool 8: analyze_query_indexes
- Behavior: 3/5. With no annotations provided, the description must carry full behavioral disclosure. It successfully discloses the input cardinality limit (10 queries), but fails to describe the output format, computational cost implications, or whether this is a safe read-only operation versus a potentially expensive one.
- Conciseness: 5/5. Perfectly concise single sentence with zero waste. The parenthetical '(up to 10)' efficiently packs a critical constraint without verbosity. Information density is high and front-loaded.
- Completeness: 3/5. For a three-parameter tool with no output schema, the description is adequate but incomplete. It omits what the analysis returns (DDL recommendations? Score ratings? Impact estimates?) and doesn't explain the behavioral difference between the 'dta' and 'llm' methods defined in the enum.
- Parameters: 4/5. Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context that '(up to 10)' queries are accepted—cardinality information not present in the schema's 'List of Query strings' description. This meaningfully constrains user expectations beyond the raw schema.
- Purpose: 4/5. Provides a specific verb ('Analyze') and resource ('SQL queries'/'indexes') and includes the critical constraint '(up to 10)', which implicitly distinguishes this from the sibling analyze_workload_indexes. However, it doesn't explicitly reference sibling alternatives to make the distinction crystal clear.
- Usage Guidelines: 3/5. The 'up to 10' limit implies this is for targeted analysis rather than bulk workload assessment, suggesting when to prefer analyze_workload_indexes instead. However, it lacks explicit 'when to use/when not to use' guidance or named alternatives.
Tool 9: top queries report
- Behavior: 3/5. No annotations provided, so the full burden falls on the description. It discloses the data source (the pg_stat_statements extension), which implies requirements. Missing: safety profile (read-only vs. destructive), the performance cost of running this report, and behavior when the extension is unavailable.
- Conciseness: 5/5. A single well-formed sentence, front-loaded with the action. Zero waste. Appropriate length for the tool's complexity.
- Completeness: 4/5. Adequate for a two-parameter reporting tool with 100% schema coverage. Mentions the data source and ranking criteria. Lacking an output schema, it could benefit from noting whether results include query text, call counts, or just identifiers, but the scope is acceptable.
- Parameters: 4/5. Schema coverage is 100% with complete parameter descriptions. The description adds crucial domain context that these are PostgreSQL queries from pg_stat_statements, which helps interpret the 'total_time', 'mean_time', and 'resources' sort options. This elevates it above the baseline of 3.
- Purpose: 4/5. Clear verb 'Reports' with a specific resource ('queries') and data source (pg_stat_statements). Identifies the scope as the slowest or most resource-intensive queries. Lacks explicit differentiation from siblings like 'explain_query' (which analyzes specific queries) or 'analyze_query_indexes'.
- Usage Guidelines: 3/5. Implies a prerequisite by mentioning the dependency on the 'pg_stat_statements' extension. However, it lacks explicit when-to-use guidance versus siblings ('explain_query' for specific query analysis vs. this for top-N discovery) or when-not-to-use warnings.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
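The weighting described above can be sketched as a small calculation. This is a minimal sketch using only the weights and cutoffs stated on this page; the function names are illustrative:

```python
# Sketch of the quality-score formula described above; weights and tier
# cutoffs are the ones stated on this page, function names are illustrative.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,           # Purpose Clarity
    "usage_guidelines": 0.20,  # Usage Guidelines
    "behavior": 0.20,          # Behavioral Transparency
    "parameters": 0.15,        # Parameter Semantics
    "conciseness": 0.10,       # Conciseness & Structure
    "completeness": 0.10,      # Contextual Completeness
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 Tool Definition Quality Score for a single tool."""
    return sum(w * scores[dim] for dim, w in DIMENSION_WEIGHTS.items())

def definition_quality(per_tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS, so one weak tool drags the score down."""
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(per_tool_scores: list, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality(per_tool_scores) + 0.3 * coherence

def tier(score: float) -> str:
    """A >= 3.5, B >= 3.0, C >= 2.0, D >= 1.0, F below that."""
    for letter, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return letter
    return "F"
```

With these weights, a server whose tools all score 3 on every dimension and whose coherence is 3.0 lands at an overall 3.0 (tier B), while the 40% minimum term means a single 1/5 tool pulls the definition-quality component down sharply.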
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/moecodeshere/mcptrial'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.