Glama

Server Details

Comprehensive PostgreSQL documentation and best practices, including ecosystem tools

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: timescale/pg-aiguide
GitHub Stars: 1,439

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

2 tools
search_docs: Search Documentation (grade A)
Read-only · Idempotent

Search documentation using semantic or keyword search. Supports Tiger Cloud (TimescaleDB), PostgreSQL, and PostGIS.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | Yes | The maximum number of matches to return. | 10
query | Yes | The search query. For semantic search, use natural language. For keyword search, provide keywords. |
source | Yes | The documentation source to search: "tiger" for Tiger Cloud and TimescaleDB, "postgres" for PostgreSQL, "postgis" for the PostGIS spatial extension. Specific versions are provided with _X.X suffixes. |
search_type | Yes | The type of search to perform: "semantic" uses natural-language vector similarity, "keyword" uses BM25 keyword matching. |
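As a sketch of how an MCP client might invoke this tool: the snippet below builds a JSON-RPC 2.0 `tools/call` request following the MCP convention. The helper name is ours, and transport details (Streamable HTTP framing, session ids) are assumed to be handled by your MCP client library.

```python
import json

def search_docs_request(query, source, search_type="semantic", limit=10):
    """Build a JSON-RPC 2.0 `tools/call` request body for search_docs.

    Illustrative only: a real MCP client manages request ids and the
    Streamable HTTP transport for you.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "search_docs",
            "arguments": {
                "query": query,
                "source": source,            # "tiger", "postgres", or "postgis"
                "search_type": search_type,  # "semantic" or "keyword"
                "limit": limit,
            },
        },
    }

req = search_docs_request("when should I use a BRIN index?", source="postgres")
print(json.dumps(req, indent=2))
```

All four arguments are required by the schema, so the helper always emits them even when the caller relies on the documented defaults.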

Output Schema

Parameters (JSON Schema)

Name | Required | Description
results | Yes |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, indicating this is a safe, repeatable read operation. The description adds useful context about supported documentation sources and search types, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or what constitutes a 'match' in results. It doesn't contradict annotations, but adds only moderate value beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise at two sentences. The first sentence states the core functionality, and the second sentence lists supported sources without redundancy. Every word earns its place, and the information is front-loaded with the primary purpose stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 required parameters), excellent schema coverage (100%), presence of annotations, and existence of an output schema (which handles return values), the description is complete enough. It covers what the tool does, supported sources, and search types - providing adequate context without needing to duplicate structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already thoroughly documents all four parameters (source, search_type, query, limit) including their purposes, enums, and defaults. The description mentions 'semantic or keyword search' and supported sources, which aligns with but doesn't significantly expand upon the schema's parameter documentation. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search documentation using semantic or keyword search.' It specifies the verb ('search') and resource ('documentation'), and distinguishes it from the only sibling tool 'view_skill' by focusing on search functionality rather than viewing. The mention of supported documentation sources (Tiger Cloud, PostgreSQL, PostGIS) adds specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by specifying the supported documentation sources and search types (semantic/keyword). However, it doesn't explicitly state when NOT to use it or mention alternatives (e.g., when to use 'view_skill' instead). The guidance is helpful but lacks explicit exclusion criteria or sibling tool comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

view_skill: View Skill (grade C)
Read-only · Idempotent

Retrieve detailed skills for TimescaleDB operations and best practices.

Available Skills

The server exposes 8 skills (name and description):

design-postgis-tables
  Comprehensive PostGIS spatial table design reference covering geometry types, coordinate systems, spatial indexing, and performance patterns for location-based applications.

design-postgres-tables
  Use this skill for general PostgreSQL table design. Trigger when the user asks to: design PostgreSQL tables, schemas, or data models, both when creating new tables and when modifying existing ones; choose data types, constraints, or indexes for PostgreSQL; create user tables, order tables, reference tables, or JSONB schemas; understand PostgreSQL best practices for normalization, constraints, or indexing; design update-heavy, upsert-heavy, or OLTP-style tables.
  Keywords: PostgreSQL schema, table design, data types, PRIMARY KEY, FOREIGN KEY, indexes, B-tree, GIN, JSONB, constraints, normalization, identity columns, partitioning, row-level security.
  Comprehensive reference covering data types, indexing strategies, constraints, JSONB patterns, partitioning, and PostgreSQL-specific best practices.

find-hypertable-candidates
  Use this skill to analyze an existing PostgreSQL database and identify which tables should be converted to Timescale/TimescaleDB hypertables. Trigger when the user asks to: analyze database tables for hypertable conversion potential; identify time-series or event tables in an existing schema; evaluate if a table would benefit from Timescale/TimescaleDB; audit PostgreSQL tables for migration to Timescale/TimescaleDB/TigerData; score or rank tables for hypertable candidacy.
  Keywords: hypertable candidate, table analysis, migration assessment, Timescale, TimescaleDB, time-series detection, insert-heavy tables, event logs, audit tables.
  Provides SQL queries to analyze table statistics, index patterns, and query patterns. Includes scoring criteria (8+ points = good candidate) and pattern recognition for IoT, events, transactions, and sequential data.

migrate-postgres-tables-to-hypertables
  Use this skill to migrate identified PostgreSQL tables to Timescale/TimescaleDB hypertables with optimal configuration and validation. Trigger when the user asks to: migrate or convert PostgreSQL tables to hypertables; execute hypertable migration with minimal downtime; plan blue-green migration for large tables; validate hypertable migration success; configure compression after migration.
  Prerequisites: tables already identified as candidates (use find-hypertable-candidates first if needed).
  Keywords: migrate to hypertable, convert table, Timescale, TimescaleDB, blue-green migration, in-place conversion, create_hypertable, migration validation, compression setup.
  Step-by-step migration planning including partition column selection, chunk interval calculation, PK/constraint handling, migration execution (in-place vs blue-green), and performance validation queries.

pgvector-semantic-search
  Use this skill for setting up vector similarity search with pgvector for AI/ML embeddings, RAG applications, or semantic search. Trigger when the user asks to: store or search vector embeddings in PostgreSQL; set up semantic search, similarity search, or nearest neighbor search; create HNSW or IVFFlat indexes for vectors; implement RAG (Retrieval Augmented Generation) with PostgreSQL; optimize pgvector performance, recall, or memory usage; use binary quantization for large vector datasets.
  Keywords: pgvector, embeddings, semantic search, vector similarity, HNSW, IVFFlat, halfvec, cosine distance, nearest neighbor, RAG, LLM, AI search.
  Covers: halfvec storage, HNSW index configuration (m, ef_construction, ef_search), quantization strategies, filtered search, bulk loading, and performance tuning.

postgres
  Use this skill for any PostgreSQL database work: table design, indexing, data types, constraints, extensions (pgvector, PostGIS, TimescaleDB), search, and migrations. Trigger when the user asks to: design or modify PostgreSQL tables, schemas, or data models; choose data types, constraints, indexes, or partitioning strategies; work with pgvector embeddings, semantic search, or RAG; set up full-text search, hybrid search, or BM25 ranking; use PostGIS for spatial/geographic data; set up TimescaleDB hypertables for time-series data; migrate tables to hypertables or evaluate migration candidates.
  Keywords: PostgreSQL, Postgres, SQL, schema, table design, indexes, constraints, pgvector, PostGIS, TimescaleDB, hypertable, semantic search, hybrid search, BM25, time-series.

postgres-hybrid-text-search
  Use this skill to implement hybrid search combining BM25 keyword search with semantic vector search using Reciprocal Rank Fusion (RRF). Trigger when the user asks to: combine keyword and semantic search; implement hybrid search or multi-modal retrieval; use BM25/pg_textsearch with pgvector together; implement RRF (Reciprocal Rank Fusion) for search; build search that handles both exact terms and meaning.
  Keywords: hybrid search, BM25, pg_textsearch, RRF, reciprocal rank fusion, keyword search, full-text search, reranking, cross-encoder.
  Covers: pg_textsearch BM25 index setup, parallel query patterns, client-side RRF fusion (Python/TypeScript), weighting strategies, and optional ML reranking.

setup-timescaledb-hypertables
  Use this skill when creating database schemas or tables for Timescale, TimescaleDB, TigerData, or Tiger Cloud, especially for time-series, IoT, metrics, events, or log data, and to improve the performance of any insert-heavy table. Trigger when the user asks to: create or design SQL schemas/tables and Timescale/TimescaleDB/TigerData/Tiger Cloud is available; set up hypertables, compression, retention policies, or continuous aggregates; configure partition columns, segment_by, order_by, or chunk intervals; optimize time-series database performance or storage; create tables for sensors, metrics, telemetry, events, or transaction logs.
  Keywords: CREATE TABLE, hypertable, Timescale, TimescaleDB, time-series, IoT, metrics, sensor data, compression policy, continuous aggregates, columnstore, retention policy, chunk interval, segment_by, order_by.
  Step-by-step instructions for hypertable creation, column selection, compression policies, retention, continuous aggregates, and indexes.
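The trigger keywords in the skill descriptions suggest how an agent might route a user request to the right skill before calling view_skill. A minimal, hypothetical keyword-overlap router (the keyword sets below are abridged from the skill list; a real agent would rely on the full descriptions):

```python
# Hypothetical router: score each skill by keyword overlap with the request.
# Keyword sets abridged from the skill descriptions above.
SKILL_KEYWORDS = {
    "pgvector-semantic-search": {"pgvector", "embeddings", "hnsw", "rag", "semantic"},
    "postgres-hybrid-text-search": {"hybrid", "bm25", "rrf", "keyword"},
    "setup-timescaledb-hypertables": {"hypertable", "time-series", "compression", "iot"},
}

def suggest_skill(request: str) -> str:
    """Return the skill whose keyword set best overlaps the request text."""
    words = set(request.lower().split())
    return max(SKILL_KEYWORDS, key=lambda skill: len(SKILL_KEYWORDS[skill] & words))

print(suggest_skill("set up RAG with pgvector embeddings"))
# -> "pgvector-semantic-search"
```

Naive token overlap ties easily on short requests; it only illustrates the routing idea implied by the published keyword lists.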

Parameters (JSON Schema)

Name | Required | Description | Default
path | Yes | A relative path to a file or directory within the skill to view. If empty, the `SKILL.md` file is viewed by default. Use `.` to list the root directory of the skill. |
skill_name | Yes | The name of the skill to browse, or `.` to list all available skills. |

Output Schema

Parameters (JSON Schema)

Name | Required | Description
content | Yes | The content of the file or directory listing.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, indicating this is a safe, non-destructive read operation. The description adds value by listing available skills, which provides context on what can be retrieved, but it doesn't disclose additional behavioral traits like rate limits, authentication needs, or error handling. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is overly long and poorly structured. The first sentence is concise, but it's followed by a lengthy, unformatted list of available skills that belongs elsewhere (e.g., in output schema or separate documentation). This adds noise without enhancing tool understanding, making it inefficient and not front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 parameters, annotations, and an output schema), the description is partially complete. It explains what skills are available but lacks context on how to use the tool effectively. The output schema existence means return values needn't be described, but the description fails to provide adequate usage context or integration with sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents both parameters (skill_name and path). The description does not add any parameter-specific semantics beyond what the schema provides, such as examples or clarifications on skill_name values. Baseline score of 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieve detailed skills for TimescaleDB operations and best practices.' It specifies the verb 'retrieve' and the resource 'skills,' but it doesn't explicitly differentiate from its sibling tool 'search_docs' (which likely searches documentation rather than retrieving skill details). The purpose is clear but lacks sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lists available skills in detail, but this is more of a catalog than usage instructions. There is no mention of when to use 'view_skill' over 'search_docs' or other potential tools, nor any context about prerequisites or typical scenarios for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
