call518

MCP PostgreSQL Operations

get_vacuum_effectiveness_analysis

Analyze PostgreSQL VACUUM effectiveness to identify suboptimal maintenance patterns, compare manual vs autovacuum performance, and calculate maintenance efficiency ratios without performance impact.

Instructions

[Tool Purpose]: Analyze VACUUM effectiveness and maintenance patterns using existing statistics

[Exact Functionality]:

  • Compare manual VACUUM vs autovacuum effectiveness patterns

  • Analyze VACUUM frequency vs table activity (DML operations)

  • Identify tables with suboptimal VACUUM patterns

  • Calculate maintenance efficiency ratios without performance impact

  • Show VACUUM coverage analysis across all tables
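
The tool's actual queries are not published on this page, but PostgreSQL already tracks the raw counters such an analysis needs in the `pg_stat_user_tables` view (`vacuum_count`, `autovacuum_count`, `n_tup_upd`, `n_tup_del`, and friends). A minimal sketch of one plausible "maintenance efficiency ratio" built from those counters — the ratio definition here is an illustrative assumption, not the tool's documented formula:

```python
# Hypothetical sketch: a "maintenance efficiency ratio" computed from the
# counters PostgreSQL exposes in pg_stat_user_tables. The ratio definition
# (VACUUM runs per 10k row modifications) is an assumption for illustration.

def maintenance_ratio(vacuum_count, autovacuum_count, n_tup_upd, n_tup_del):
    """VACUUM runs per 10,000 row modifications; higher means more
    frequent maintenance relative to write activity."""
    dml = n_tup_upd + n_tup_del
    if dml == 0:
        return None  # no write activity, ratio undefined
    return (vacuum_count + autovacuum_count) / dml * 10_000

# Sample rows shaped like a pg_stat_user_tables query result.
rows = [
    {"relname": "orders", "vacuum_count": 2, "autovacuum_count": 48,
     "n_tup_upd": 900_000, "n_tup_del": 100_000},
    {"relname": "audit_log", "vacuum_count": 0, "autovacuum_count": 1,
     "n_tup_upd": 0, "n_tup_del": 2_000_000},
]

for r in rows:
    ratio = maintenance_ratio(r["vacuum_count"], r["autovacuum_count"],
                              r["n_tup_upd"], r["n_tup_del"])
    print(r["relname"], ratio)
```

Reading only the statistics collector's counters, as above, is what makes this kind of analysis possible "without performance impact": no table data is scanned.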

[Required Use Cases]:

  • When user requests "VACUUM effectiveness", "maintenance efficiency", "VACUUM analysis", etc.

  • When planning manual VACUUM schedules or autovacuum tuning

  • When identifying tables with poor maintenance patterns

  • When analyzing overall database maintenance health

[Strictly Prohibited Use Cases]:

  • Requests for VACUUM execution or scheduling

  • Requests for autovacuum configuration changes

  • Requests for maintenance operation control

Args:

  • database_name: Target database name (uses default database from POSTGRES_DB env var if omitted)

  • schema_name: Schema to analyze (analyzes all user schemas if omitted)

  • limit: Maximum number of tables to analyze (1-100, default: 30)

Returns: VACUUM effectiveness analysis with maintenance patterns and recommendations
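
As an MCP tool, this is invoked through a standard JSON-RPC `tools/call` request as defined by the Model Context Protocol. A minimal sketch of the payload an MCP client would send, with illustrative argument values:

```python
import json

# A JSON-RPC "tools/call" request per the Model Context Protocol spec.
# The argument values are illustrative; all three parameters are optional.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_vacuum_effectiveness_analysis",
        "arguments": {
            # Omit database_name to fall back to POSTGRES_DB;
            # omit schema_name to analyze all user schemas.
            "schema_name": "public",
            "limit": 30,
        },
    },
}
print(json.dumps(request, indent=2))
```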

Input Schema

Name            Required    Description    Default
database_name   No
schema_name     No
limit           No

Output Schema

Name      Required    Description    Default
result    Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates that this is an analysis tool ('analyze', 'calculate', 'show') rather than an execution tool, and explicitly states it works 'without performance impact'. However, it doesn't mention authentication requirements, rate limits, or what specific statistics it accesses, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections ([Tool Purpose], [Exact Functionality], [Required Use Cases], [Strictly Prohibited Use Cases], Args, Returns). Each section is focused and adds value. While somewhat detailed, every sentence serves a purpose in clarifying the tool's scope and usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's analytical nature, its three parameters, and the absence of annotations — offset by an output schema (implied by the 'Returns' statement) — the description provides comprehensive context. It covers purpose, functionality, usage guidelines, prohibitions, parameter semantics, and the return value. Because an output schema exists, the description doesn't need to detail the return structure, making it complete for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates by providing clear parameter explanations in the Args section. It explains that database_name uses a default from environment variable if omitted, schema_name analyzes all user schemas if omitted, and limit has a range and default. This adds meaningful context beyond the basic schema, though it doesn't explain the format or constraints of database/schema names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
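
The documented contract for `limit` (valid range 1-100, default 30) and the fallback behavior of the optional parameters can be expressed as a small resolver. This is a sketch of the documented contract only, not the server's actual code:

```python
import os

def resolve_limit(limit=None):
    """Apply the documented contract: default 30, valid range 1-100."""
    if limit is None:
        return 30
    if not isinstance(limit, int) or not 1 <= limit <= 100:
        raise ValueError("limit must be an integer between 1 and 100")
    return limit

def resolve_database(database_name=None):
    """Fall back to the POSTGRES_DB environment variable, as documented."""
    return database_name or os.environ.get("POSTGRES_DB")

print(resolve_limit())     # the documented default
print(resolve_limit(100))  # upper bound is accepted
```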

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as analyzing VACUUM effectiveness and maintenance patterns using existing statistics. It specifies the exact functionality including comparing manual vs autovacuum, analyzing frequency vs activity, identifying suboptimal patterns, calculating efficiency ratios, and showing coverage analysis. This distinguishes it from sibling tools like get_autovacuum_activity or get_vacuum_analyze_stats which focus on different aspects of vacuum operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit 'Required Use Cases' (e.g., when user requests VACUUM effectiveness, planning schedules, identifying poor patterns) and 'Strictly Prohibited Use Cases' (e.g., requests for VACUUM execution, autovacuum configuration changes, maintenance operation control). This gives clear guidance on when to use this tool versus alternatives that might handle execution or configuration.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

