Glama
HenkDz

PostgreSQL MCP Server

pg_monitor_database

Monitor PostgreSQL database performance in real-time by tracking connections, queries, locks, replication status, and setting custom alert thresholds for proactive management.

Instructions

Get real-time monitoring information for a PostgreSQL database

Input Schema

Name                Required  Description       Default
connectionString    No        —                 —
includeTables       No        —                 false
includeQueries      No        —                 false
includeLocks        No        —                 false
includeReplication  No        —                 false
alertThresholds     No        Alert thresholds  —
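For illustration, a call to pg_monitor_database might pass parameters like the following. This is a hypothetical example; the field names and defaults come from the Zod schema in the Implementation Reference, and the connection string is a placeholder.

```typescript
// Hypothetical input for pg_monitor_database. All include* flags default to
// false when omitted, so this requests table and query metrics explicitly.
const exampleInput = {
  connectionString: "postgresql://user:pass@localhost:5432/mydb", // placeholder
  includeTables: true,
  includeQueries: true,
  alertThresholds: {
    connectionPercentage: 80,    // alert above 80% of max_connections
    longRunningQuerySeconds: 30, // flag queries running longer than 30s
    cacheHitRatio: 0.95,         // alert when cache hit ratio drops below 0.95
  },
};
console.log(Object.keys(exampleInput).length);
```

Omitting `connectionString` relies on the server resolving a default connection via `getConnectionString`.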

Implementation Reference

  • The tool handler definition. This is the entry point for the 'pg_monitor_database' tool execution. It validates input using the schema, calls the core executeMonitorDatabase function, formats the output as MCP ToolOutput, and handles errors.
    export const monitorDatabaseTool: PostgresTool = {
      name: 'pg_monitor_database',
      description: 'Get real-time monitoring information for a PostgreSQL database',
      inputSchema: MonitorDatabaseInputSchema,
      async execute(params: unknown, getConnectionString: GetConnectionStringFn): Promise<ToolOutput> {
        const validationResult = MonitorDatabaseInputSchema.safeParse(params);
        if (!validationResult.success) {
          return { content: [{ type: 'text', text: `Invalid input: ${validationResult.error.format()}` }], isError: true };
        }
        try {
          const result = await executeMonitorDatabase(validationResult.data, getConnectionString);
          return { 
            content: [
              { type: 'text', text: `Database monitoring results at ${result.timestamp}` },
              { type: 'text', text: `Alerts: ${result.alerts.length > 0 ? result.alerts.map(a => `${a.level.toUpperCase()}: ${a.message}`).join('; ') : 'None'}` },
              { type: 'text', text: `Full metrics (JSON): ${JSON.stringify(result.metrics, null, 2)}` }
            ]
          };
        } catch (error) {
          const errorMessage = error instanceof McpError ? error.message : (error instanceof Error ? error.message : String(error));
          return { content: [{ type: 'text', text: `Error monitoring database: ${errorMessage}` }], isError: true };
        }
      }
    };
  • Core execution logic for database monitoring. Connects to the PostgreSQL database, fetches various metrics (database stats, tables, active queries, locks, replication), generates alerts based on configurable thresholds, and returns structured monitoring results.
    async function executeMonitorDatabase(
      input: MonitorDatabaseInput,
      getConnectionString: GetConnectionStringFn
    ): Promise<MonitoringResult> {
      const resolvedConnectionString = getConnectionString(input.connectionString);
      const db = DatabaseConnection.getInstance();
      const alerts: Alert[] = [];
      const { includeTables, includeQueries, includeLocks, includeReplication, alertThresholds } = input;
      
      try {
        await db.connect(resolvedConnectionString);
        
        const now = new Date();
        const timestamp = now.toISOString();
        
        const dbMetrics = await getDatabaseMetrics(db);
        
        if (alertThresholds?.connectionPercentage && 
            (dbMetrics.connections.total / dbMetrics.connections.max) * 100 > alertThresholds.connectionPercentage) {
          const percentage = (dbMetrics.connections.total / dbMetrics.connections.max) * 100;
          alerts.push({
            level: percentage > 90 ? 'critical' : 'warning',
            message: `High connection usage: ${percentage.toFixed(1)}%`,
            context: {
              current: dbMetrics.connections.total,
              max: dbMetrics.connections.max
            }
          });
        }
        
        if (alertThresholds?.cacheHitRatio && 
            dbMetrics.cacheHitRatio < alertThresholds.cacheHitRatio) {
          alerts.push({
            level: dbMetrics.cacheHitRatio < 0.8 ? 'critical' : 'warning',
            message: `Low cache hit ratio: ${(dbMetrics.cacheHitRatio * 100).toFixed(1)}%`,
            context: {
              current: dbMetrics.cacheHitRatio
            }
          });
        }
        
        const tableMetricsResult: Record<string, TableMetrics> = {};
        if (includeTables) {
          const tables = await getTableMetrics(db);
          
          for (const table of tables) {
            tableMetricsResult[table.name] = table;
            
            if (alertThresholds?.deadTuplesPercentage) {
              const deadTuplePercentage = table.rowCount > 0 
                ? (table.deadTuples / table.rowCount) * 100 
                : 0;
                
              if (deadTuplePercentage > alertThresholds.deadTuplesPercentage) {
                alerts.push({
                  level: deadTuplePercentage > 30 ? 'critical' : 'warning',
                  message: `High dead tuple percentage in table ${table.name}: ${deadTuplePercentage.toFixed(1)}%`,
                  context: {
                    table: table.name,
                    deadTuples: table.deadTuples,
                    totalRows: table.rowCount
                  }
                });
              }
            }
            
            if (alertThresholds?.vacuumAge && table.lastVacuum) {
              const lastVacuumDate = new Date(table.lastVacuum);
              const daysSinceVacuum = Math.floor((now.getTime() - lastVacuumDate.getTime()) / (1000 * 60 * 60 * 24));
              
              if (daysSinceVacuum > alertThresholds.vacuumAge) {
                alerts.push({
                  level: 'warning',
                  message: `Table ${table.name} hasn't been vacuumed in ${daysSinceVacuum} days`,
                  context: {
                    table: table.name,
                    lastVacuum: table.lastVacuum
                  }
                });
              }
            }
          }
        }
        
        let activeQueriesResult: ActiveQueryInfo[] = [];
        if (includeQueries) {
          activeQueriesResult = await getActiveQueries(db);
          
          if (alertThresholds?.longRunningQuerySeconds) {
            const threshold = alertThresholds.longRunningQuerySeconds;
            const longRunningQueries = activeQueriesResult.filter(
              q => q.duration > threshold
            );
            
            for (const query of longRunningQueries) {
              alerts.push({
                level: query.duration > threshold * 2 ? 'critical' : 'warning',
                message: `Long-running query (${query.duration.toFixed(1)}s) by ${query.username}`,
                context: {
                  pid: query.pid,
                  duration: query.duration,
                  query: query.query.substring(0, 100) + (query.query.length > 100 ? '...' : '')
                }
              });
            }
          }
        }
        
        let locksResult: LockInfo[] = [];
        if (includeLocks) {
          locksResult = await getLockInfo(db);
          
          const blockingLocks = locksResult.filter(l => !l.granted);
          if (blockingLocks.length > 0) {
            alerts.push({
              level: 'warning',
              message: `${blockingLocks.length} blocking locks detected`,
              context: {
                count: blockingLocks.length
              }
            });
          }
        }
        
        let replicationResult: ReplicationInfo[] = [];
        if (includeReplication) {
          replicationResult = await getReplicationInfo(db);
          
          for (const replica of replicationResult) {
            if (replica.replayLag) {
              const lagMatch = replica.replayLag.match(/(\d+):(\d+):(\d+)/);
              if (lagMatch) {
                const hours = Number.parseInt(lagMatch[1]);
                const minutes = Number.parseInt(lagMatch[2]);
                
                if (hours > 0 || minutes > 5) {
                  alerts.push({
                    level: hours > 0 ? 'critical' : 'warning',
                    message: `High replication lag for ${replica.clientAddr}: ${replica.replayLag}`,
                    context: {
                      clientAddr: replica.clientAddr,
                      lag: replica.replayLag
                    }
                  });
                }
              }
            }
          }
        }
        
        return {
          timestamp,
          metrics: {
            database: dbMetrics,
            tables: tableMetricsResult,
            queries: activeQueriesResult,
            locks: locksResult,
            replication: includeReplication ? replicationResult : undefined
          },
          alerts
        };
      } catch (error) {
        console.error("Error monitoring database:", error);
        throw new McpError(ErrorCode.InternalError, `Failed to monitor database: ${error instanceof Error ? error.message : String(error)}`);
      } finally {
        await db.disconnect();
      }
    }
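The connection-usage check in executeMonitorDatabase can be isolated as a pure function. The sketch below mirrors that logic (including the hard-coded 90% critical cutoff); it is an illustration, not part of the server's API.

```typescript
type AlertLevel = 'warning' | 'critical' | null;

// Mirrors the handler's connection-usage check: no alert at or below the
// configured threshold, 'critical' above 90% usage, otherwise 'warning'.
function connectionAlertLevel(total: number, max: number, thresholdPct: number): AlertLevel {
  const pct = (total / max) * 100;
  if (pct <= thresholdPct) return null;
  return pct > 90 ? 'critical' : 'warning';
}

console.log(connectionAlertLevel(85, 100, 80)); // warning: 85% exceeds the 80% threshold
```

Note that the critical boundary (90%) is fixed in the handler regardless of the caller-supplied `connectionPercentage`, so a threshold above 90 can only ever produce critical alerts.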
  • Zod input schema definition for the pg_monitor_database tool, including optional flags for what metrics to include and alert thresholds.
    const AlertThresholdsSchema = z.object({
      connectionPercentage: z.number().min(0).max(100).optional().describe("Connection usage percentage threshold"),
      longRunningQuerySeconds: z.number().positive().optional().describe("Long-running query threshold in seconds"),
      cacheHitRatio: z.number().min(0).max(1).optional().describe("Cache hit ratio threshold"),
      deadTuplesPercentage: z.number().min(0).max(100).optional().describe("Dead tuples percentage threshold"),
      vacuumAge: z.number().positive().int().optional().describe("Vacuum age threshold in days"),
    }).describe("Alert thresholds");
    
    const MonitorDatabaseInputSchema = z.object({
      connectionString: z.string().optional(),
      includeTables: z.boolean().optional().default(false),
      includeQueries: z.boolean().optional().default(false),
      includeLocks: z.boolean().optional().default(false),
      includeReplication: z.boolean().optional().default(false),
      alertThresholds: AlertThresholdsSchema.optional(),
    });
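Because every include* flag defaults to false, a call with no arguments returns only the core database metrics. The following is a Zod-free sketch of the normalization the schema performs (assumed shape for illustration; the real server uses the Zod schema above):

```typescript
interface MonitorInput {
  connectionString?: string;
  includeTables: boolean;
  includeQueries: boolean;
  includeLocks: boolean;
  includeReplication: boolean;
}

// Applies the same defaults the Zod schema declares: all include* flags false.
function withDefaults(raw: Partial<MonitorInput>): MonitorInput {
  return {
    connectionString: raw.connectionString,
    includeTables: raw.includeTables ?? false,
    includeQueries: raw.includeQueries ?? false,
    includeLocks: raw.includeLocks ?? false,
    includeReplication: raw.includeReplication ?? false,
  };
}

console.log(withDefaults({ includeLocks: true }).includeTables); // false
```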
  • src/index.ts:225-257 (registration)
    Registration of the monitorDatabaseTool (pg_monitor_database) in the central allTools array, which is passed to the MCP server constructor to make tools available.
    const allTools: PostgresTool[] = [
      // Core Analysis & Debugging
      analyzeDatabaseTool,
      debugDatabaseTool,
      
      // Schema & Structure Management (Meta-Tools)
      manageSchemaTools,
      manageFunctionsTool,
      manageTriggersTools,
      manageIndexesTool,
      manageConstraintsTool,
      manageRLSTool,
      
      // User & Security Management
      manageUsersTool,
      
      // Query & Performance Management
      manageQueryTool,
      
      // Data Operations (Enhancement Tools)
      executeQueryTool,
      executeMutationTool,
      executeSqlTool,
      
      // Documentation & Metadata
      manageCommentsTool,
      
      // Data Migration & Monitoring
      exportTableDataTool,
      importTableDataTool,
      copyBetweenDatabasesTool,
      monitorDatabaseTool
    ];
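An MCP server typically dispatches incoming tool calls by name. The constructor that consumes allTools is not shown in this excerpt, but a minimal sketch of indexing the array for dispatch might look like this (assumed pattern, not the actual server code):

```typescript
interface PostgresToolStub { name: string }

// Index tools by name for O(1) dispatch. Duplicate names would otherwise
// silently overwrite each other, so fail loudly at registration time.
function indexTools(tools: PostgresToolStub[]): Map<string, PostgresToolStub> {
  const byName = new Map<string, PostgresToolStub>();
  for (const t of tools) {
    if (byName.has(t.name)) throw new Error(`Duplicate tool name: ${t.name}`);
    byName.set(t.name, t);
  }
  return byName;
}

const registry = indexTools([{ name: 'pg_monitor_database' }, { name: 'pg_execute_query' }]);
console.log(registry.has('pg_monitor_database')); // true
```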
  • Helper function to fetch core database metrics like size, connections, uptime, transactions, and cache hit ratio using PostgreSQL system views.
    async function getDatabaseMetrics(db: DatabaseConnection): Promise<DatabaseMetrics> {
      const dbInfo = await db.queryOne<{
        db_name: string;
        db_size: string;
        uptime: string;
        committed_tx: string;
        rolled_back_tx: string;
      }>(
        `SELECT datname as db_name, pg_size_pretty(pg_database_size(current_database())) as db_size, 
                (now() - pg_postmaster_start_time())::text as uptime, 
                xact_commit as committed_tx, xact_rollback as rolled_back_tx 
         FROM pg_stat_database WHERE datname = current_database()`
      );
      
      const connInfo = await db.queryOne<{
        active_connections: string;
        idle_connections: string;
        total_connections: string;
        max_connections: string;
      }>(
        `SELECT 
          (SELECT count(*) FROM pg_stat_activity WHERE state = 'active') as active_connections, 
          (SELECT count(*) FROM pg_stat_activity WHERE state = 'idle') as idle_connections, 
          (SELECT count(*) FROM pg_stat_activity) as total_connections, 
          setting as max_connections 
         FROM pg_settings WHERE name = 'max_connections'`
      );
      
      const cacheHit = await db.queryOne<{
        cache_hit_ratio: number;
      }>(
        `SELECT sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as cache_hit_ratio 
         FROM pg_statio_user_tables WHERE (heap_blks_hit + heap_blks_read) > 0`
      );
      
      if (!dbInfo || !connInfo || !cacheHit) {
        throw new Error('Failed to retrieve core database metrics');
      }
      
      return {
        name: dbInfo.db_name,
        size: dbInfo.db_size,
        connections: {
          active: Number.parseInt(connInfo.active_connections),
          idle: Number.parseInt(connInfo.idle_connections),
          total: Number.parseInt(connInfo.total_connections),
          max: Number.parseInt(connInfo.max_connections)
        },
        uptime: dbInfo.uptime,
        transactions: {
          committed: Number.parseInt(dbInfo.committed_tx),
          rolledBack: Number.parseInt(dbInfo.rolled_back_tx)
        },
        cacheHitRatio: cacheHit.cache_hit_ratio || 0,
      };
    }
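The cache-hit query divides heap_blks_hit by total block accesses (hits plus reads). Isolated from SQL, the arithmetic is simply:

```typescript
// Cache hit ratio = hits / (hits + reads). Guard the zero-activity case,
// which the SQL query handles with its WHERE clause instead.
function cacheHitRatio(heapBlksHit: number, heapBlksRead: number): number {
  const total = heapBlksHit + heapBlksRead;
  return total > 0 ? heapBlksHit / total : 0;
}

console.log(cacheHitRatio(990, 10)); // 0.99
```

A ratio near 1 means most table reads are served from shared buffers; sustained values below roughly 0.9 usually indicate the working set no longer fits in memory.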
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'Get real-time monitoring information,' implying a read-only operation, but doesn't specify whether this requires specific permissions, has rate limits, returns structured data, or involves any side effects. For a monitoring tool with 6 parameters and no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, nested objects, no output schema, and no annotations), the description is incomplete. It doesn't address the tool's behavior, output format, or parameter usage, which are crucial for a monitoring tool with multiple configuration options. This leaves significant gaps for an AI agent to understand how to invoke it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low at 17%: only the 'alertThresholds' object carries descriptions. The tool description adds no meaning beyond the schema, such as explaining what 'includeTables' or 'connectionString' entail in the context of monitoring. It fails to compensate for the sparse schema coverage, leaving most parameters semantically unclear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('real-time monitoring information for a PostgreSQL database'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from siblings like 'pg_analyze_database' or 'pg_debug_database', which might also provide database insights, so it misses full sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'pg_analyze_database' or 'pg_debug_database'. It lacks context about prerequisites, such as needing a valid connection string, or exclusions, leaving the agent to infer usage based on the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
