
PostgreSQL MCP AllAccess

Production-ready PostgreSQL MCP server with global connection pooling, PgBouncer integration, and automatic password rotation for AWS RDS.

Quick Start (2 minutes)

With AWS Secrets Manager

# 1. Set AWS credentials
export AWS_ACCESS_KEY_ID=your_key
export AWS_SECRET_ACCESS_KEY=your_secret
export AWS_REGION=us-west-2

# 2. Deploy (fetches DB credentials from AWS)
python3 deploy_with_aws_secrets.py

# 3. Restart Claude Code to connect
# The deploy script shows the URL - it's http://localhost:3000/sse

That's it! The MCP server is now running and Claude Code will auto-detect it.

Optional: Set custom AWS secret names:

export AWS_SECRET_NAME=your/secret/name
export AWS_PASSWORD_SECRET_NAME=rds!your-password-id
python3 deploy_with_aws_secrets.py

Without AWS (Simple Credentials)

# 1. Copy .env.example and fill in your database credentials
cd /path/to/postgres_mcp_allaccess
cp .env.example .env
# Edit .env and fill in the REQUIRED fields (first 5 lines)

# 2. Deploy (docker-compose reads .env automatically)
docker-compose up -d

# 3. Restart Claude Code to connect
# MCP server is at http://localhost:3000/sse

What You Get

✅ Global connection pooling - 15 max connections to PostgreSQL (configurable)

✅ Automatic password rotation - Zero downtime when AWS rotates passwords

✅ Transaction-level pooling - Efficient connection reuse across all queries

✅ Production ready - Health checks, logging, monitoring built-in

How It Works

Multiple Claude Sessions  →  MCP Server  →  PgBouncer  →  PostgreSQL
     (90+ clients)            (max 15)      (max 15)     (sees ≤15 connections)
  • PgBouncer enforces max 15 connections to your PostgreSQL database

  • Automatic queueing - Extra requests wait, then process when connections free up

  • Zero config needed - Default settings work for most use cases
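Under the hood, the MCP server keeps a single global pool whose connections go to PgBouncer on port 6432 rather than straight to PostgreSQL on 5432. A minimal sketch of that idea, assuming asyncpg and the placeholder credentials used throughout this README (the real implementation lives in src/postgres_mcp_allaccess/database.py and differs in detail):

# Illustrative only - assumes asyncpg; credentials are placeholders
import asyncio
import asyncpg

async def main():
    # Point the pool at PgBouncer (6432), not PostgreSQL (5432),
    # so PgBouncer can enforce the 15-connection ceiling.
    pool = await asyncpg.create_pool(
        host="localhost", port=6432,
        user="your_user", password="your_pass", database="your_db",
        min_size=2, max_size=15,
        statement_cache_size=0,   # needed when PgBouncer runs in transaction mode
    )
    async with pool.acquire() as conn:   # requests beyond the limit wait here in a queue
        print(await conn.fetchval("SELECT 1"))
    await pool.close()

asyncio.run(main())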

Common Tasks

Check if it's running

curl http://localhost:3000/health
# Should show: "OK - Pool: 2/15 connections"

View logs

docker-compose logs -f

Stop server

docker-compose down

Change connection limits

Edit pgbouncer/pgbouncer.ini:

default_pool_size = 10 # Max connections to PostgreSQL

Then redeploy: docker-compose up -d --build

Available Tools (Use in Claude)

Query Execution:

  • list_tables - List tables in a schema

  • execute_query - Run any SQL query

  • describe_table - See table structure

  • execute_file - Run SQL from a file

Schema Operations:

  • list_schemas - List all schemas

  • search_tables - Find tables by name pattern

  • get_database_context - Overview of database structure

Analysis:

  • explain_query - Get query execution plan

  • analyze_query - Performance analysis

  • suggest_indexes - Index recommendations

Session:

  • get_session_info - Current session status

  • get_query_history - View query history
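Claude calls these tools for you, but you can also exercise them directly against the SSE endpoint with the MCP Python SDK. A minimal sketch - the argument names (e.g. "query") are assumptions; check the schemas reported by list_tools:

# Illustrative client - assumes the mcp Python SDK is installed
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    async with sse_client("http://localhost:3000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])   # list_tables, execute_query, ...
            # "query" as the argument name is an assumption - see each tool's schema
            result = await session.call_tool("execute_query", {"query": "SELECT version()"})
            print(result.content)

asyncio.run(main())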

Troubleshooting

Claude Code can't connect

Solution: Restart Claude Code after deploying the MCP server.

"Pool not initialized" error

Solution: Run any query - the pool initializes on first use.

Container won't start

# Check what went wrong
docker-compose logs

# Common fixes:
# 1. Port 3000 in use: Change MCP_SSE_PORT in .env
# 2. Bad credentials: Check .env file or AWS secrets
# 3. Build cache issue: docker-compose up -d --build --force-recreate

Connection timeout

Increase pool timeout (default 30 seconds):

export POSTGRES_POOL_TIMEOUT=60
python3 deploy_with_aws_secrets.py   # or: docker-compose up -d

Too many connections to PostgreSQL

Check how many connections your user actually holds:

docker exec -e PGPASSWORD='your_pass' postgres-mcp-allaccess \
  psql -h localhost -p 6432 -U your_user -d your_db \
  -c "SELECT count(*) FROM pg_stat_activity WHERE usename='your_user'"

Should show 2-15 connections (never exceeds default_pool_size).

Configuration

Connection Pool Settings

Environment variables (set before deploying):

POSTGRES_MAX_CONNECTIONS=15   # MCP pool size
POSTGRES_MIN_CONNECTIONS=2    # Warm connections
POSTGRES_POOL_TIMEOUT=30      # Queue wait time (seconds)

PgBouncer limits (edit pgbouncer/pgbouncer.ini):

default_pool_size = 15    # MAX connections to PostgreSQL
min_pool_size = 2         # MIN kept alive
pool_mode = transaction   # Release after each transaction
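For reference, the three POSTGRES_* variables above could be parsed like this before the pool is created (a sketch; the actual parsing lives in database.py):

import os

# Hypothetical helper - names mirror the environment variables documented above
max_size = int(os.getenv("POSTGRES_MAX_CONNECTIONS", "15"))   # hard cap on MCP pool size
min_size = int(os.getenv("POSTGRES_MIN_CONNECTIONS", "2"))    # warm connections kept open
timeout = float(os.getenv("POSTGRES_POOL_TIMEOUT", "30"))     # seconds a request may wait in the queue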

Understanding the Pool

Question: If 90 Claude sessions query at once, does PostgreSQL see 90 connections?

Answer: No. PostgreSQL sees at most 15 connections (or whatever you set in default_pool_size).

  • Connections auto-release after each query

  • Extra requests queue and process when connections free up

  • No session management needed - everything is automatic
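You can check this yourself by firing more concurrent queries than the pool allows and watching pg_stat_activity (see the Troubleshooting command above) stay at or below the limit. A rough sketch, assuming asyncpg and the same placeholder credentials as the rest of this README:

# Simulate 90 concurrent callers against a pool capped at 15
import asyncio
import asyncpg

async def main():
    pool = await asyncpg.create_pool(
        host="localhost", port=6432, user="your_user",
        password="your_pass", database="your_db",
        min_size=2, max_size=15, statement_cache_size=0,
    )

    async def one_query(i: int):
        # Only 15 of these run at a time; the rest wait in the acquire() queue.
        async with pool.acquire() as conn:
            await conn.fetchval("SELECT pg_sleep(0.1)")

    await asyncio.gather(*(one_query(i) for i in range(90)))
    await pool.close()

asyncio.run(main())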

Automatic Password Rotation

When AWS rotates your password:

  1. Next query fails with auth error

  2. MCP fetches new password from AWS Secrets Manager

  3. Updates both MCP pool and PgBouncer

  4. Retries query - succeeds!

Zero downtime. No restart needed.

Requirements:

  • Deploy with deploy_with_aws_secrets.py (so AWS credentials are available to the server)

  • AWS secret names configured (done automatically)
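In code terms, the rotation path is roughly: catch the authentication failure, pull the current secret, rebuild the pool, and retry once. A simplified sketch, assuming boto3, asyncpg, and the standard RDS-managed secret layout (the server's real logic is in database.py and also pushes the new password into PgBouncer):

# Illustrative only - not the server's actual implementation
import json, os
import asyncpg
import boto3
from asyncpg.exceptions import InvalidPasswordError

_pool = None  # the global pool, rebuilt when the password rotates

def fetch_password() -> str:
    # RDS-managed secrets store JSON like {"username": "...", "password": "..."}
    sm = boto3.client("secretsmanager", region_name=os.environ["AWS_REGION"])
    raw = sm.get_secret_value(SecretId=os.environ["AWS_PASSWORD_SECRET_NAME"])
    return json.loads(raw["SecretString"])["password"]

async def make_pool(password: str):
    return await asyncpg.create_pool(
        host="localhost", port=6432, user="your_user",
        password=password, database="your_db",
        min_size=2, max_size=15, statement_cache_size=0,
    )

async def query_with_rotation(sql: str):
    global _pool
    if _pool is None:
        _pool = await make_pool(fetch_password())
    try:
        async with _pool.acquire() as conn:
            return await conn.fetch(sql)
    except InvalidPasswordError:
        # Password rotated: fetch the new secret, rebuild the pool, retry once.
        # (The real server also updates PgBouncer's credentials at this point.)
        _pool = await make_pool(fetch_password())
        async with _pool.acquire() as conn:
            return await conn.fetch(sql)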

Security

Never commit:

  • .env files

  • *.log files

  • config/postgres_config.ini

Safe to commit:

  • AWS secret names (e.g., "postgres/dev/db")

  • Database hostnames (e.g., "mydb.rds.amazonaws.com")

All sensitive data is gitignored automatically.

Advanced

Project Structure

postgres_mcp_allaccess/
├── deploy_with_aws_secrets.py        # AWS deployment script
├── docker-compose.yml                # Container orchestration
├── Dockerfile                        # Single container (MCP + PgBouncer)
├── docker-entrypoint.sh              # Startup script
├── pgbouncer/pgbouncer.ini           # Connection pool config
└── src/postgres_mcp_allaccess/
    ├── server.py                     # MCP server
    ├── database.py                   # Global pool + auto rotation
    ├── pgbouncer_manager.py          # PgBouncer control
    └── transports/sse_transport.py   # HTTP/SSE transport

Health Monitoring

# Health check endpoint
curl http://localhost:3000/health

# Check PgBouncer process
docker exec postgres-mcp-allaccess ps aux | grep pgbouncer

# View all environment variables
docker exec postgres-mcp-allaccess env | grep -E "POSTGRES|AWS|PGBOUNCER"

Manual Password Refresh

# Force refresh from AWS (if auto-rotation fails)
docker-compose restart

License

Apache License 2.0 - See LICENSE file for details.

