ThinAir Data
Server Details
Connect your AI to any database — PostgreSQL, MySQL, or SQL Server — in seconds.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 23 of 23 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose. For instance, `analyze_table` and `data_profile` are differentiated by scope (quick stats vs full quality report), while `explain_query`, `optimize_query`, and `suggest_queries` serve different needs. No overlapping functionality exists.
All tool names follow a consistent `verb_noun` pattern in lowercase snake_case (e.g., `analyze_table`, `list_connections`, `generate_migration`). The naming is predictable and intuitive, with no mixed conventions or abbreviations causing confusion.
With 23 tools, the set exceeds the recommended 3-15 range, but each tool serves a justified purpose in the database management domain. No tool feels redundant or unnecessary; the count is appropriate for the server's comprehensive scope.
The tool surface is extensive, covering schema discovery, querying, optimization, analysis, migrations, monitoring, and security. Minor gaps exist (e.g., no migration execution, no schema diff tool), but the core workflows are well-supported and agents can accomplish most tasks without dead ends.
Available Tools
23 tools

analyze_table (Grade A) · Read-only · Idempotent · Inspect
QUICK statistical snapshot for ONE table — row count, null rates, cardinality, numeric min/max/avg, date ranges. Optionally drill into a specific column. Use this for a fast at-a-glance read. Use data_profile instead when the user wants a FULL quality report including PII detection and a health score.
| Name | Required | Description | Default |
|---|---|---|---|
| table | Yes | Table name to analyze | |
| column | No | Specific column to deep-analyze | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
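To make the call shape concrete, here is a minimal sketch of an MCP `tools/call` request for this tool. The argument names come from the parameter table above; the table and connection values are invented for illustration, and the envelope is the standard MCP JSON-RPC shape rather than anything this server documents.

```python
import json

# Hypothetical tools/call request for analyze_table. "trips", "driver_id",
# and "prod-postgres-fleet" are illustrative values only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_table",
        "arguments": {
            "table": "trips",                     # required
            "column": "driver_id",                # optional drill-down target
            "connection": "prod-postgres-fleet",  # resolved via list_connections
        },
    },
}
print(json.dumps(request, indent=2))
```

Omitting `column` returns the table-level snapshot; omitting `connection` falls back to the tenant default, per the parameter description.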
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It frames the tool as a statistical snapshot with no destructive actions, but it could state the read-only nature explicitly. Still, the description gives a good sense of behavior and does not contradict any implicit expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. Front-loaded with purpose and outputs, followed by usage guidance. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists the statistics returned (row count, null rates, etc.), setting reasonable expectations. It could mention the return format or the behavior of the column drill-down, but overall it is sufficient for a simple analysis tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal extra meaning beyond the schema: 'Optionally drill into a specific column' aligns with the `column` param, but the schema already says 'Specific column to deep-analyze'. No additional parameter context is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'QUICK statistical snapshot for ONE table' and lists specific metrics (row count, null rates, etc.), providing a specific verb-resource combo. It distinguishes from sibling 'data_profile' by emphasizing speed vs. full quality report.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance: 'Use this for a fast at-a-glance read' and tells when to use alternative: 'Use data_profile instead when the user wants a FULL quality report including PII detection and a health score.' This clearly defines when to choose this tool over a sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cross_db_query (Grade A) · Read-only · Idempotent · Inspect
⚠️ SQL MUST BE VALID IN EVERY DIALECT YOU TARGET — stick to ANSI-ish SELECT syntax when mixing pg/mysql/mssql. SELECT TOP 10 (mssql) or LIMIT (others) will fail on the wrong side. Run the same query across 2-4 connections in parallel; returns per-connection rows + errors for diffing. Canonical use cases: regional compare (['mssql-reporting-us', 'mssql-reporting-eu']), cross-dialect sync check (['prod-postgres-fleet', 'prod-mysql-app']), 3-env drift, 4-region compare. Resolve every connection name via list_connections first; tool fails per-connection on unknown names. ARCHITECT-tier cap: 4 connections; https://www.thinair.co/ for unlimited. [ARCHITECT tier]
| Name | Required | Description | Default |
|---|---|---|---|
| sql | Yes | SQL query to run on each connection. Must be valid in every dialect targeted — prefer ANSI SELECT syntax when mixing dialects. | |
| connections | Yes | Array of 2-4 connection NAMES (not IDs) from list_connections output. Examples: ['mssql-reporting-us', 'mssql-reporting-eu'] for regional compare, ['prod-postgres-fleet', 'prod-mysql-app'] for cross-dialect sync check. |
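To illustrate the dialect warning, here is a sketch of arguments that stay on ANSI ground: an aggregate query with no row-limiting clause, so the same SQL parses on postgres, mysql, and mssql. The connection names are the examples from the description; the `orders` table is hypothetical.

```python
# Hypothetical cross_db_query arguments for a regional row-count compare.
# COUNT(*) with no LIMIT/TOP clause is valid in all three dialects, which
# sidesteps the trap called out in the description:
#   "SELECT TOP 10 * FROM orders"    -- mssql only
#   "SELECT * FROM orders LIMIT 10"  -- pg/mysql only
arguments = {
    "sql": "SELECT COUNT(*) AS row_count FROM orders",
    "connections": ["mssql-reporting-us", "mssql-reporting-eu"],
}
```

Diffing `row_count` across the per-connection results then reveals regional drift without any dialect-specific syntax.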
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses parallel execution, per-connection error handling, failure on unknown connections, dialect sensitivity, and a 4-connection cap. Since no annotations exist, the description fully carries the transparency burden.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is dense but well-organized, front-loading a critical warning. It could be slightly improved with bullet points, but it remains concise and informative without excessive verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity and lack of output schema, the description covers input requirements, return format (rows+errors), error conditions, tier limitations, and required preparation. It is fully adequate for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds significant meaning beyond the input schema: clarifies that sql must be ANSI-standard, connections must be names from list_connections, and provides detailed examples. The descriptions in the schema are also enhanced by the tool description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool runs the same query across 2-4 connections in parallel, returning per-connection rows and errors for diffing. It uses specific verbs and resources, and the parallel multi-connection nature distinguishes it from siblings like query_sql.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit canonical use cases (regional compare, cross-dialect sync check, drift detection) and warns about SQL dialect compatibility and the need to resolve connection names via list_connections first. Also mentions tier cap and contact for unlimited.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
data_profile (Grade A) · Read-only · Idempotent · Inspect
FULL data quality + compliance report for a table: per-column stats PLUS a 0-100 health score, type-gated PII detection (email / phone / SSN / etc.), and insight warnings. Slower than analyze_table but returns everything needed to audit a table for ownership / compliance / onboarding. Use this when the user says 'profile' or 'quality report' or mentions PII/compliance. [BUILD tier]
| Name | Required | Description | Default |
|---|---|---|---|
| table | Yes | Table to profile | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
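The `connection` parameter's resolution rule (match semantically against `list_connections` output, fall back to the tenant default, never invent names) can be sketched roughly as follows. The inventory shape, a list of dicts with a `name` key in the order connections were added, is an assumption; the real response shape is not documented here.

```python
def resolve_connection(user_hint, inventory):
    """Pick a connection name from (assumed) list_connections output.

    Falls back to the tenant default (first added) when no hint is given,
    and fails loudly rather than inventing a name.
    """
    if not user_hint:
        return inventory[0]["name"]  # tenant default: first added
    hint = user_hint.lower()
    for conn in inventory:
        if hint in conn["name"].lower():
            return conn["name"]
    raise LookupError(f"no connection matches {user_hint!r}")

# Illustrative inventory only.
inventory = [
    {"name": "prod-postgres-fleet"},
    {"name": "mysql-analytics-west"},
]
```

With this inventory, a user asking for "analytics" resolves to `mysql-analytics-west`, and no hint resolves to `prod-postgres-fleet`.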
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool is comprehensive, slower, and returns a health score and PII detection. However, it never explicitly states that the tool is read-only and non-destructive; that can be inferred but is not stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, a few short sentences with no filler. It front-loads the core purpose and follows with usage guidance. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description fully explains the return value (per-column stats, health score, PII detection, warnings). It also covers the trade-off (slower vs analyze_table) and use cases, making it complete for an AI agent to decide.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so both parameters are documented. The description adds significant value for the `connection` parameter by instructing how to resolve it using `list_connections` and semantic matching, going beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a FULL data quality and compliance report for a table, including per-column stats, health score, PII detection, and warnings. It distinguishes itself from the sibling `analyze_table` by noting it is slower but more comprehensive.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: when the user says 'profile' or 'quality report' or mentions PII/compliance. Also implies when not to use by noting it is slower than `analyze_table`, providing clear decision guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
describe_schema (Grade B) · Read-only · Idempotent · Inspect
Discover the full database schema: tables, columns, types, primary keys, foreign keys, and indexes. Results cached 1 hour. Call with refresh=true after schema changes.
| Name | Required | Description | Default |
|---|---|---|---|
| refresh | No | Force live introspection, bypassing cache | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
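The cache-versus-refresh behavior translates into two call variants, sketched below with an illustrative connection name; the argument dicts mirror the parameter table above.

```python
# Hypothetical argument payloads for describe_schema. The first call may be
# served from the 1-hour cache; after applying a schema change, pass
# refresh=True to force live introspection, per the description.
cached_call = {"connection": "prod-postgres-fleet"}
post_migration_call = {"connection": "prod-postgres-fleet", "refresh": True}
```

Agents that just ran a migration should prefer the second form so stale cached schema does not mislead subsequent queries.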
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses caching behavior and the refresh mechanism. No annotations exist, so the description carries the burden. Details on permissions and output handling are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with clear front-loading of purpose and concise caching guidance. No extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the description omits the return format. It also does not guide the agent to call the sibling `list_connections` first, and gives no instruction on the result structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters fully (100% coverage). Description adds minor value by reinforcing refresh usage but does not supplement the connection parameter beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Discover the full database schema' with explicit listing of schema elements (tables, columns, types, primary keys, foreign keys, indexes). Distinct from siblings like 'analyze_table' or 'list_connections'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides caching and refresh behavior ('Results cached 1 hour. Call with refresh=true after schema changes.') but does not compare with alternative tools or specify when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
detect_anomalies (Grade A) · Read-only · Idempotent · Inspect
Scan a table for unusual patterns: volume drops/spikes, data gaps, value concentration, high null rates, stale data. Severity-ranked alerts. Tables > 100k rows use a sampled path (~5%) — when a finding has sampled:true, surface it to the user with a hedge like 'based on a ~5% sample' rather than presenting the number as exact. Dialect-aware: TABLESAMPLE SYSTEM on postgres, TABLESAMPLE PERCENT on mssql, WHERE RAND() on mysql.
| Name | Required | Description | Default |
|---|---|---|---|
| table | Yes | Table to scan for anomalies | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. | |
| date_column | No | Date column for trend analysis (auto-detected if omitted) |
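The description's instruction to hedge sampled findings can be sketched as a small presentation helper. The `message` and `sampled` keys are assumptions about the per-finding shape, which the listing does not document.

```python
def present_finding(finding):
    """Render an anomaly finding for the user, hedging sampled numbers as the
    tool description instructs. The dict keys here are assumed, not documented."""
    text = finding["message"]
    if finding.get("sampled"):
        # Tables > 100k rows use a ~5% sampled path; do not present as exact.
        text += " (based on a ~5% sample)"
    return text
```

This keeps exact findings unqualified while flagging estimates from the sampled path.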
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reveals that tables > 100k rows use a sampled path (~5%) and explains how to present findings with 'sampled:true'. It also discloses dialect-aware sampling behavior. No annotations were provided, so the description carries the full burden, and it does so well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the main purpose. Every sentence adds value, though it could be slightly more concise. The structure is logical, progressing from purpose to behavior details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description mentions 'Severity-ranked alerts' and the sampled flag but does not detail the full return structure. With no output schema, more information about the alert format would improve completeness. Sibling tools like data_profile might offer comparison context, but the description stands alone.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. The description adds minimal extra semantic value beyond the schema: it notes that date_column is auto-detected if omitted. No deep parameter semantics are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Scan a table for unusual patterns' and lists specific anomaly types (volume drops/spikes, data gaps, value concentration, high null rates, stale data). It distinguishes from sibling tools like analyze_table and data_profile by focusing on anomaly detection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context for large tables (sampled path) and dialect-specific sampling, but lacks explicit guidance on when to use this tool versus alternatives like analyze_table or data_profile. No exclusion criteria or when-not-to-use advice is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
explain_query (Grade A) · Read-only · Idempotent · Inspect
Analyze a SQL query's execution plan and return plain-English performance recommendations. Runs EXPLAIN ANALYZE (Postgres) or EXPLAIN FORMAT=JSON (MySQL). [BUILD tier]
| Name | Required | Description | Default |
|---|---|---|---|
| sql | Yes | The SELECT query to analyze | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It fails to disclose that EXPLAIN ANALYZE actually executes the SELECT query (which may have performance impact), and does not specify output structure or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the main purpose, plus a parenthetical note on database variants and tier. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description is adequate for understanding input but lacks details on output format and the execution behavior of EXPLAIN ANALYZE. Sibling tool overlap is not addressed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds valuable detail beyond schema: it restricts 'sql' to SELECT queries and provides detailed guidance on resolving the 'connection' parameter via list_connections.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it analyzes a SQL query's execution plan and returns plain-English performance recommendations, and specifies the underlying EXPLAIN variants for Postgres and MySQL. This distinguishes it from siblings like optimize_query or query_sql.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions a '[BUILD tier]' but gives no explicit guidance on when to use this tool versus alternatives like optimize_query or query_sql. Usage is implied but not clearly bounded.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_n_plus_one (Grade C) · Read-only · Idempotent · Inspect
Detect N+1 query patterns from recent query history. Fingerprints queries and flags repeated patterns. [ARCHITECT tier]
| Name | Required | Description | Default |
|---|---|---|---|
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. | |
| min_executions | No | Minimum executions to flag (default 5) |
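The fingerprinting idea behind this tool (collapsing queries that differ only in literal values into one pattern, so repeats can be counted against `min_executions`) can be sketched roughly as follows. This is an illustration of the concept, not the server's actual algorithm.

```python
import re

def fingerprint(sql):
    """Toy query fingerprint: strip literals so N+1-style repeats collapse
    to one pattern. A sketch of the idea only."""
    s = re.sub(r"'[^']*'", "?", sql)   # string literals -> placeholder
    s = re.sub(r"\b\d+\b", "?", s)     # numeric literals -> placeholder
    return re.sub(r"\s+", " ", s).strip().lower()
```

Under this scheme, `SELECT * FROM users WHERE id = 42` and `SELECT * FROM users WHERE id = 7` share one fingerprint, so a loop issuing them per-row is flagged once it crosses the execution threshold.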
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavioral traits. It mentions fingerprinting and flagging but omits whether the tool is read-only, requires specific permissions, or what the output format is. This leaves significant ambiguity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus a tag are concise and front-loaded with the purpose. Every sentence adds value, though the tag is non-essential. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should explain return values or results. It does not mention what the tool outputs (e.g., a report, list of queries). This incompleteness undermines agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, with both parameters (connection, min_executions) described in detail in the schema. The tool description does not add parameter semantics beyond what the schema provides, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool detects N+1 query patterns from recent query history using fingerprinting and flagging. It avoids tautology and differentiates from similar tools like query_history, though sibling differentiation from detect_anomalies is implicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like query_history or detect_anomalies. The '[ARCHITECT tier]' tag hints at audience but does not provide usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_migration (Grade A) · Read-only · Idempotent · Inspect
Generate dialect-correct ALTER TABLE migration SQL + rollback from a plain-English intent. Output uses the connection's exact dialect (ALTER TABLE for all three, plus pg-specific USING casts / mssql-specific sp_rename / mysql-specific MODIFY COLUMN). Never executes. Check response dialect field before manually editing — don't hand-translate across dialects. [BUILD tier]
| Name | Required | Description | Default |
|---|---|---|---|
| intent | Yes | Plain English: 'add soft delete to users', 'add index on trips.driver_id' | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
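The description's advice to check the response's dialect field before hand-editing can be expressed as a small guard. The response shape (a top-level `dialect` key alongside the generated SQL) is an assumption, since no output schema is published.

```python
def assert_dialect(response, expected):
    """Guard before hand-editing generated migration SQL: confirm the response
    was generated for the dialect you expect. The 'dialect' key location is an
    assumption about the undocumented response shape."""
    actual = response.get("dialect")
    if actual != expected:
        raise ValueError(
            f"migration is {actual} SQL, not {expected}; regenerate against "
            "the right connection instead of hand-translating"
        )
    return response
```

Failing fast here is the point: regenerating against the correct connection is safer than manually porting pg-specific `USING` casts or mssql `sp_rename` calls across dialects.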
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It clearly states that the tool never executes, uses the connection's exact dialect, lists dialect-specific features, and warns against hand-translation. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Under 100 words, front-loaded main purpose, well-structured with all key points. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description explains that the output includes the migration SQL, a rollback, and a dialect field. It covers inputs and behavior fully for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description still adds value: examples for `intent`, plus detailed resolution guidance for `connection` referencing `list_connections` and the default fallback.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates dialect-correct ALTER TABLE migration SQL + rollback from plain-English intent, and never executes. It distinguishes from siblings like query_sql by focusing on migration generation rather than execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use it: generating a migration from plain English. Warns against hand-translating across dialects and advises checking the response dialect before editing. Implicitly cautions against using it for execution.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`generate_seed_data` (A) · Read-only · Idempotent
Generate realistic, schema-aware INSERT statements for development and testing. Respects types, constraints, and FK relationships. Never executes. [BUILD tier]
| Name | Required | Description | Default |
|---|---|---|---|
| table | Yes | Table to generate seed data for | |
| format | No | Output format (default sql) | |
| row_count | No | Number of rows (default 100, max 1000) | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No |
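The `connection` resolution procedure that the parameter description prescribes (exact name first, then semantic match against `list_connections` output, then the tenant default, never an invented name) can be sketched roughly as follows. This is a client-side illustration only; the record shape and helper name are assumptions, not part of the server's documented API:

```python
import re

def resolve_connection(user_hint, connections):
    """Resolve a user's connection hint against list_connections output.

    `connections` is assumed to be a list of dicts with at least 'name',
    ordered oldest-first (so index 0 is the tenant default, i.e. the
    first-added connection).
    """
    if not user_hint:
        # No hint from the user: fall back to the tenant default.
        return connections[0]["name"]
    hint = user_hint.lower()
    # An exact name match wins.
    for conn in connections:
        if conn["name"].lower() == hint:
            return conn["name"]
    # Otherwise match semantically: the hint appears as a token in the
    # name, e.g. 'analytics' -> 'eu-analytics-replica', 'prod' -> 'prod-main'.
    for conn in connections:
        if re.search(rf"(^|[-_]){re.escape(hint)}([-_]|$)", conn["name"].lower()):
            return conn["name"]
    # Never invent names: signal that resolution failed instead.
    raise LookupError(f"no connection matches {user_hint!r}")
```

The failure branch matters: per the parameter description, an agent that cannot resolve a name should re-consult `list_connections` rather than guess.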
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool never executes (safety) and respects types, constraints, and FK relationships (behavioral detail). It does not cover auth needs or rate limits, but the key behavioral trait of non-execution is well-communicated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences) and front-loaded with the main action. Every sentence adds value: the first states the core function, the second adds key behavioral notes and tier info. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers basic functionality and behavioral guarantees but omits details about the output format (though schema includes format parameter) and error handling. For a tool with 4 parameters and no output schema, additional context on return value and failure modes would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents all parameters. The description adds no extra parameter-level meaning beyond the schema's existing descriptions, e.g., it repeats 'respects types' but does not elaborate on parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: generating schema-aware INSERT statements for development and testing. It uses specific verbs and resources ('Generate realistic, schema-aware INSERT statements') and distinguishes itself from siblings like query_sql (which executes queries) by explicitly stating 'Never executes'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for development and testing but does not explicitly compare to alternatives like generate_migration or query_sql. It lacks guidance on when not to use the tool, such as for production execution or schema changes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`impact_analysis` (A) · Read-only · Idempotent
Analyze the blast radius of a proposed schema change: FK dependencies, affected views, row count, risk score. [ARCHITECT tier]
| Name | Required | Description | Default |
|---|---|---|---|
| intent | Yes | Describe the change: 'rename customer_id to account_id', 'drop column X' | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
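The deliverables the description promises (FK dependencies, affected views, row count, risk score) suggest a catalog walk plus a scoring step. A toy sketch follows; the catalog shape and the scoring weights are invented for illustration and are not the server's actual algorithm:

```python
def blast_radius(table, catalog):
    """Toy impact analysis over an assumed catalog shape.

    `catalog` maps table -> {'row_count': int, 'referenced_by': [tables],
    'views': [view names]}. Risk grows with row count and fan-in,
    capped at 100.
    """
    entry = catalog[table]
    dependents = entry["referenced_by"]
    risk = min(100, entry["row_count"] // 10_000
               + 10 * len(dependents)
               + 5 * len(entry["views"]))
    return {"fk_dependencies": dependents,
            "affected_views": entry["views"],
            "row_count": entry["row_count"],
            "risk_score": risk}
```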
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that this is an [ARCHITECT tier] tool and lists its outputs, but does not detail authorization needs, rate limits, or side effects. It adds some context but lacks comprehensive behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with a tag, no wasted words. Every part earns its place, and it is front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists specific deliverables (FK dependencies, affected views, row count, risk score), which is fairly complete. It could be more explicit about output format, but it adequately informs the agent of expected results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%; both parameters have detailed descriptions. The tool description does not add significant meaning beyond the schema, so baseline is appropriate. The schema itself provides clear semantics for 'intent' and 'connection'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Analyze' and the resource 'blast radius of a proposed schema change', listing specific outputs like FK dependencies, affected views, row count, and risk score. It distinguishes itself from siblings such as 'describe_schema' or 'analyze_table' by focusing on impact analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for proposed schema changes but does not explicitly state when not to use it or suggest alternative tools like 'analyze_table' or 'describe_schema'. The context signals show many siblings, so guidance is needed but missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`issue_api_key` (A)
Issue a fresh ta_data_* API key for your current tenant. Useful for pasting into /add-database or configuring a separate integration. The new key is tied to your existing plan tier. Rate-limited to 5 issuances per tenant per day.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | No | |
| plan | No | |
| user | No | |
| error | No | |
| usage | No | |
| key_id | No | Non-secret fingerprint of the issued key. Safe to log and surface in UI. |
| status | Yes | |
| api_key | No | One-time API key secret. Returned only on successful creation, in both structuredContent.api_key and content[0].text — save it immediately. The server stores only the SHA-256 of the secret; once this response is lost, the key is unrecoverable and must be rotated. Matches AWS IAM / Stripe / GitHub PAT one-time-reveal semantics. |
| tenantId | No | |
| retry_after_s | No | |
| credential_returned_once | No |
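The one-time-reveal semantics described for `api_key` (the server persists only a hash; the plaintext appears exactly once, in the creation response) can be sketched server-side. This is a minimal illustration with a dict standing in for real storage; function and field names are assumptions:

```python
import hashlib
import secrets

def issue_key(store):
    """Mint a ta_data_* key, persist only its SHA-256, return plaintext once.

    `store` is a stand-in dict mapping key_id -> hex digest; a real server
    would use a database. The plaintext never touches storage, so losing
    this response means the key must be rotated, not recovered.
    """
    secret = "ta_data_" + secrets.token_urlsafe(24)
    digest = hashlib.sha256(secret.encode()).hexdigest()
    key_id = digest[:12]  # non-secret fingerprint, safe to log
    store[key_id] = digest
    return {"status": "ok", "key_id": key_id, "api_key": secret,
            "credential_returned_once": True}

def verify_key(store, presented):
    """Later calls authenticate by re-hashing the presented secret."""
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return store.get(digest[:12]) == digest
```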
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that the key is tied to the existing plan tier and rate-limited (5 per tenant per day). Does not mention revocation or immediate usability, but for a simple issuance, these details are adequate given no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with the main action. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers what, why, and constraints. The description itself does not say how the key is returned, though the output schema's `api_key` field documents the one-time reveal in detail; for a simple tool, coverage is fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema (0 params, 100% coverage). Baseline for 0 parameters is 4; description adds no param info, which is acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (issue a fresh API key), specifies the key type (ta_data_*), and targets the current tenant. It distinguishes from sibling tools as no other tool deals with API keys.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear use cases (pasting into /add-database or configuring integration) and constraints (tied to plan tier, rate-limited). Lacks explicit when-not-to-use or alternatives, but context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`list_connections` (A) · Read-only · Idempotent
List every database connection registered for your tenant: name, id, dbType (postgres / mysql / mssql), createdAt. Flags duplicate names — only the first-added connection of a duplicate name is reachable by name. Returns nothing sensitive (no DSN, no credentials).
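The duplicate-name rule the description calls out (only the first-added connection of a given name is reachable by name) amounts to a first-write-wins index over connections ordered by createdAt. A minimal sketch, with the record shape assumed:

```python
def build_name_index(connections):
    """Index connections by name, first-added wins.

    `connections` is assumed ordered by createdAt ascending. A later
    connection that reuses a name is flagged as an unreachable duplicate,
    matching the behavior list_connections reports.
    """
    by_name, duplicates = {}, []
    for conn in connections:
        if conn["name"] in by_name:
            duplicates.append(conn["id"])  # shadowed: unreachable by name
        else:
            by_name[conn["name"]] = conn
    return by_name, duplicates
```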
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description covers key behavioral traits: it flags duplicate names (only first reachable by name), confirms no sensitive data is returned, and lists output fields. It does not mention side effects or idempotency, but for a read-only list, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tight sentences: the first states the core purpose and output, the second adds a critical nuance about duplicates, the third reassures about sensitivity. No unnecessary words, front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description covers the essential output fields and duplicate handling. It could mention ordering or pagination, but the tool's simplicity means this is nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, so the description's job is to document the output. It does so by detailing the fields (name, id, dbType, createdAt) and the duplicate behavior, adding value beyond the empty input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a clear action verb 'List' and specifies the resource 'database connections' with explicit fields (name, id, dbType, createdAt). It distinguishes itself from sibling tools like query_sql or test_connection by being a listing of all connections.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly tells when to use it (to see all connections) and provides a key caveat about duplicate names. However, it does not explicitly state when not to use it or suggest alternatives, missing a small opportunity for clarity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`optimize_query` (A) · Read-only · Idempotent
Suggest a rewritten, optimized version of a SQL query with explanations. Identifies sequential scans, missing indexes, sort spills, join inefficiencies, and suggests index DDL. [BUILD tier]
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | SQL query to optimize | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden. It discloses that the tool identifies specific issues and suggests index DDL, but does not clarify whether it executes the query, what permissions it requires, or what side effects exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus a tier marker. Front-loaded with the main purpose. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lists what issues it finds but does not specify the output format (e.g., returned as text, JSON). With no output schema, more detail on the response structure would be helpful. Also lacks context on prerequisites like connection permissions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema already documents 'query' and 'connection'. The connection parameter has a detailed description. The tool description adds no further parameter-level information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action: suggest an optimized SQL query rewrite with explanations. It lists specific issues it identifies (sequential scans, missing indexes, etc.) and distinguishes from siblings like 'explain_query' and 'query_sql'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for SQL optimization but does not explicitly state when to use this tool vs alternatives like 'explain_query' or 'query_sql'. The '[BUILD tier]' gives a hint but no direct guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`pii_scan` (A) · Read-only · Idempotent
Sweep string columns across tables for common PII patterns (email, SSN, credit card, phone, JWT, bearer tokens). Heuristic-only — not a compliance guarantee. [BUILD tier]
| Name | Required | Description | Default |
|---|---|---|---|
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. | |
| max_tables | No | Max tables to sample (default 25, cap 50) | |
| sample_rows | No | Rows sampled per table (default 20, cap 100) |
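Heuristic PII detection of the kind the description names usually reduces to a bank of regexes run over sampled string values. A rough sketch follows; these patterns are illustrative, not the server's actual rules, and like any heuristic they yield false positives, which is exactly why the description disclaims any compliance guarantee:

```python
import re

# Illustrative patterns only -- real scanners add checksums (e.g. Luhn
# for card numbers) and column-name context to cut false positives.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\+?\d{1,3}[ .-]?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}"),
    "jwt": re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"),
}

def scan_value(value):
    """Return the set of PII pattern names that match a sampled value."""
    return {name for name, rx in PII_PATTERNS.items() if rx.search(value)}
```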
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the transparency burden. It discloses the heuristic nature and non-compliance guarantee, but omits details on side effects (e.g., read-only), required permissions, or potential impact on data. The [BUILD tier] hint is vague.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of two sentences with no fluff. It front-loads the core purpose and adds important caveats efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description does not mention output format or how results are returned. Given the lack of output schema, this omission affects completeness, especially when compared to sibling tools that may have similar scan-like functions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes parameters (connection, max_tables, sample_rows) adequately. The tool description adds no extra meaning beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action ('Sweep string columns across tables') and specific PII patterns (email, SSN, credit card, etc.), distinguishing it from sibling tools like data_profile or detect_anomalies that have broader scopes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks explicit guidance on when to use this tool versus alternatives. It does not mention preconditions, when not to use it, or how it compares to other tools for data scanning or anomaly detection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`query_firewall` (B) · Destructive
Manage per-connection SQL rules: block dangerous patterns, require WHERE on large tables, log PII access. [ARCHITECT tier]
| Name | Required | Description | Default |
|---|---|---|---|
| sql | No | SQL to test against rules | |
| action | Yes | What to do | |
| message | No | Message shown when rule triggers | |
| pattern | No | Regex pattern to match | |
| rule_name | No | Rule name | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. | |
| block_action | No | Action when matched (default block) |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description mentions rule behaviors (block, require WHERE, log PII) but lacks details on side effects, authorization requirements, or state changes. Given no annotations, more behavioral context would be helpful, but the description provides moderate insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single concise sentence plus a tier tag, front-loaded with purpose. Each word is informative, and the structure is clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite 7 parameters and no output schema, the description is minimal. It does not explain return values, how rules are applied, or the impact of actions. For a security tool, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no additional meaning to parameters; it only summarizes the tool's purpose. No contradiction or extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool manages per-connection SQL rules with specific actions (block, require WHERE, log PII). It uses a specific verb 'manage' and resource 'SQL rules', distinguishing it from siblings like query_sql which executes queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like query_sql or explain_query. The phrase 'per-connection SQL rules' implies security context but does not state exclusions or conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`query_history` (B) · Read-only
Return recent queries executed through ThinAir with timing, row counts, and status. [BUILD tier]
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of recent queries (default 20, max 100) | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; the description mentions the returned content but omits side effects, authentication, rate limits, and how recency is defined. Limited behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence plus tier tag, with no wasted words. Front-loaded with purpose and key details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with 2 optional params and no output schema. Mentions return fields (timing, row counts, status) but lacks structure details. Adequate but not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds no extra meaning beyond the schema's parameter descriptions. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb (return), resource (recent queries), and specifics (timing, row counts, status). Distinguishes from siblings like query_sql or saved_queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when or when not to use this tool, nor comparisons to alternatives. The '[BUILD tier]' tag weakly implies development context but is insufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
`query_sql` (A) · Read-only · Idempotent
Execute a read-only SQL query against the target connection. ONLY SELECT / WITH / EXPLAIN permitted. Write dialect-appropriate SQL for the connection's engine — use PostgreSQL syntax for postgres connections (SELECT NOW(), LIMIT, ILIKE), T-SQL for mssql (SELECT GETDATE(), TOP N, LIKE), MySQL for mysql (SELECT NOW(), LIMIT). Response meta includes connection + dialect so you know which syntax worked; reuse that dialect in follow-up calls. Default LIMIT 100 unless the user asks for all rows.
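The dialect split the description insists on can be made concrete. A sketch of how a client might pick syntax from the dialect reported back in response meta; the `dialect` field name is taken from the description above, while the query templates are merely examples of the per-engine syntax it lists:

```python
# Same intent ("current time plus first 5 rows"), three dialects.
SAMPLE_BY_DIALECT = {
    "postgres": "SELECT NOW(), name FROM users ORDER BY name LIMIT 5",
    "mysql":    "SELECT NOW(), name FROM users ORDER BY name LIMIT 5",
    "mssql":    "SELECT TOP 5 GETDATE(), name FROM users ORDER BY name",
}

def pick_query(meta):
    """Choose syntax from the dialect the previous response reported."""
    dialect = meta.get("dialect", "postgres")  # assumed meta shape
    return SAMPLE_BY_DIALECT[dialect]
```

This mirrors the description's advice to reuse the dialect from response meta in follow-up calls rather than re-guessing the engine.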
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows to return (default 100, max 1000) | |
| query | Yes | SQL SELECT query to execute | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While the description states the tool is read-only and lists allowed statements, it lacks details on error handling, timeout, resource impact, or authentication requirements. With no annotations, more behavioral disclosure (e.g., speed, concurrency limits) would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (~100 words) and well-organized: core purpose, constraints, dialect rules, response info, and default behavior. Each sentence serves a clear purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, constraints, dialect, connection resolution, and limit behavior. However, without an output schema, the description does not describe the structure of the query result (e.g., rows, columns, pagination). This is a minor gap given the tool's core function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All three parameters are covered in the schema (100% coverage), but the description adds significant value: for `query` it adds dialect guidance and allowed statements; for `limit` it clarifies default and max; for `connection` it provides a detailed resolution strategy referencing `list_connections`. This exceeds the schema's documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the purpose: execute a read-only SQL query against a target connection, explicitly limiting to SELECT/WITH/EXPLAIN. It differentiates from siblings like `explain_query` by focusing on general query execution with specific dialect guidance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use: only for read-only queries with allowed statements. Offers dialect-specific syntax examples for PostgreSQL, T-SQL, and MySQL. Instructs to reuse the returned dialect in follow-up calls, and sets default LIMIT 100 with option for all rows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
quota · A · Read-only
Check current API usage, daily limit, plan name, and upgrade options.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| plan | Yes | |
| used | Yes | |
| limit | Yes | |
| reset_at | No | |
| upgrade_url | No | |
| schema_version | No | |
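A minimal sketch of how an agent might act on the required output fields (`plan`, `used`, `limit`); the sample values below are hypothetical:

```python
def quota_headroom(quota):
    """Derive remaining calls and usage fraction from a quota result.
    Only the required output fields (plan, used, limit) are assumed."""
    remaining = quota["limit"] - quota["used"]
    fraction = quota["used"] / quota["limit"] if quota["limit"] else 1.0
    return remaining, fraction

sample = {"plan": "BUILD", "used": 420, "limit": 1000}
remaining, fraction = quota_headroom(sample)
print(f"{remaining} calls left ({fraction:.0%} used)")  # 580 calls left (42% used)
```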
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates a read-only operation ('check'), but lacks details about side effects, permission requirements, or rate limits. Since no annotations are present, the description carries the full burden and provides only minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the tool's purpose without unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the essential functionality—checking usage, limit, plan, and upgrade options. However, without an output schema, more detail about the response format would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so there is no parameter information for the description to add beyond the schema; the baseline score for zero-parameter tools is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool checks API usage, daily limit, plan name, and upgrade options. It uses a clear verb 'check' and specifies the resources, distinguishing it from the database-focused sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It does not mention any prerequisites, such as authentication requirements, or compare with other tools like 'issue_api_key' that might involve API management.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
saved_queries · A · Destructive
Manage your personal library of reusable SELECT queries. action=save stores a query by name; action=run executes a saved query; action=list returns all your saved queries; action=delete removes one. [BUILD tier]
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Query ID (alternative to name for run/delete) | |
| sql | No | SQL to store (required for action=save) | |
| tag | No | Filter by tag (action=list) | |
| name | No | Query name (save/run/delete) | |
| tags | No | Tags (action=save) | |
| action | Yes | One of: `save`, `run`, `list`, `delete` | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. | |
| description | No | Freeform description (action=save) |
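Because only `action` is required and the other fields depend on which action is chosen, example argument payloads might look like the following (the query name, tag, and SQL are hypothetical):

```python
# Illustrative argument payloads for each saved_queries action.
save_args = {
    "action": "save",
    "name": "daily-signups",
    "sql": "SELECT count(*) FROM users WHERE created_at >= current_date",
    "tags": ["growth"],
    "description": "New users created today",
}
run_args = {"action": "run", "name": "daily-signups"}
list_args = {"action": "list", "tag": "growth"}
delete_args = {"action": "delete", "name": "daily-signups"}

# `action` is the only required field in every payload.
for args in (save_args, run_args, list_args, delete_args):
    assert args["action"] in ("save", "run", "list", "delete")
```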
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the tool is for SELECT queries and lists four actions, but lacks details on mutation behaviors (e.g., overwrite on save, error conditions, permissions, or scope). The '[BUILD tier]' tag hints at environment but is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: a clear purpose statement and an action summary. Every sentence adds value, and the front-loaded structure quickly conveys the tool's core offering.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 8 parameters and no output schema, the description covers the four actions adequately but omits context on how to use parameters like 'tag' or 'id' vs 'name', and does not explain the 'connection' parameter's role (though schema covers it). Missing edge cases and usage patterns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add meaning beyond the schema. It reiterates the action-to-function mapping but provides no new semantics for parameters like 'connection' or 'tags'. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool manages a personal library of reusable SELECT queries and enumerates four distinct actions (save, run, list, delete). This differentiates it from sibling tools like 'query_sql' or 'analyze_table' by specifying it's for reusable SELECT queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains each action (save, run, list, delete) and their purposes, providing clear context for when to use each. However, it does not explicitly contrast with alternatives or state when not to use this tool, leaving minor ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
show_locks · A · Read-only
List active sessions + blocking locks. Uses the dialect's own system view — pg_stat_activity on postgres, information_schema.processlist on mysql, sys.dm_exec_requests joined with sys.dm_tran_locks on mssql. No dialect arg needed — inferred from the connection. Required privileges (per dialect): postgres — pg_read_all_stats role membership (or be the role that owns the queries; otherwise you only see your own session); mysql — PROCESS privilege; mssql — VIEW SERVER STATE. If the role lacks the privilege the tool returns a clean Query blocked by security policy error rather than partial data — grant the role above and retry. RDS/Aurora/Azure managed PostgreSQL: pg_read_all_stats is grantable but not on by default. [BUILD tier]
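The per-dialect backends and privileges from the description, restated as a lookup sketch for quick reference (the tool itself infers the dialect, so nothing here is ever passed as an argument):

```python
# Per-dialect system views and required privileges, as stated in the
# show_locks tool description.
LOCK_SOURCE = {
    "postgres": "pg_stat_activity",
    "mysql": "information_schema.processlist",
    "mssql": "sys.dm_exec_requests JOIN sys.dm_tran_locks",
}
REQUIRED_PRIVILEGE = {
    "postgres": "pg_read_all_stats role membership",
    "mysql": "PROCESS",
    "mssql": "VIEW SERVER STATE",
}

def lock_diagnostics_plan(dialect):
    """Return (system view, privilege) for a dialect, mirroring the
    description's behavior: no dialect argument is ever supplied."""
    return LOCK_SOURCE[dialect], REQUIRED_PRIVILEGE[dialect]

print(lock_diagnostics_plan("postgres"))
```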
| Name | Required | Description | Default |
|---|---|---|---|
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses dialect-specific backend views, permission requirement, and no dialect arg needed. Lacks details on output format or pagination, but adequate for a diagnostic read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose followed by dense, focused sentences covering backend views, privileges, and failure behavior; long for a description, but no wasted words. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, and description does not hint at return format or pagination. Adequate for a simple diagnostic but incomplete for full agent autonomy.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already fully describes the connection parameter. Description adds no further parameter semantics, achieving baseline for 100% coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'List' and distinct resource 'active sessions + blocking locks'. Differentiates from siblings like query_sql by focusing on locking diagnostics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Purpose is implied but no explicit when-to-use or when-not-to-use compared to alternatives. Could mention not for general querying.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest_queries · B · Read-only · Idempotent
Generate schema-aware query suggestions with ready-to-run SQL. Great for exploring unfamiliar databases or finding useful queries.
| Name | Required | Description | Default |
|---|---|---|---|
| context | No | Topic or goal to focus suggestions | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
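A toy sketch of what "schema-aware suggestions" could mean in practice — one ready-to-run SELECT per matching table. The real tool's output shape is undocumented, so the structure below is an assumption:

```python
def suggest(tables, context=None):
    """Produce one ready-to-run SELECT per table, optionally filtered
    by a context keyword. Purely illustrative: the real tool's
    suggestion format is not documented."""
    picked = [t for t in tables if not context or context.lower() in t.lower()]
    return [
        {"title": f"Preview {t}", "sql": f"SELECT * FROM {t} LIMIT 10"}
        for t in picked
    ]

print(suggest(["orders", "order_items", "users"], context="order"))
```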
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It states the tool produces 'schema-aware query suggestions' but does not detail any side effects, permissions, or limits. The description is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, front-loading the primary purpose. No redundant information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should explain what the tool returns (e.g., format of suggestions, number of suggestions). It only mentions 'ready-to-run SQL' without details, leaving gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the parameters are well-documented in the input schema. The description adds value by reinforcing the 'connection' parameter's guidance to call list_connections, but otherwise does not extend beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates schema-aware query suggestions with ready-to-run SQL, and specifies its utility for exploring unfamiliar databases or finding useful queries. This distinguishes it from sibling tools like query_sql or explain_query.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for exploration and discovery but does not explicitly state when not to use it or suggest alternatives. While the context is clear, there is no exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test_connection · A · Read-only
Ping a connection (SELECT 1) and return server version + latency. Fast way to confirm credentials and network path without running describe_schema.
| Name | Required | Description | Default |
|---|---|---|---|
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
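The ping-and-measure behavior can be sketched in a few lines; sqlite stands in for the real engine here, and the result shape is an assumption rather than this tool's actual output:

```python
import sqlite3
import time

def ping(conn):
    """Minimal sketch of what test_connection does: run SELECT 1, time
    the round trip, and report the server version. sqlite is used
    purely as a stand-in engine."""
    start = time.perf_counter()
    conn.execute("SELECT 1").fetchone()
    latency_ms = (time.perf_counter() - start) * 1000
    version = conn.execute("SELECT sqlite_version()").fetchone()[0]
    return {"ok": True, "version": version, "latency_ms": latency_ms}

print(ping(sqlite3.connect(":memory:")))
```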
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden. It discloses the exact SQL being executed (SELECT 1) and the nature of the return (server version + latency). It implies a lightweight, fast operation, which is accurate. Could be more explicit about side effects (none) but adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the key action and outcome. No extraneous information; each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple ping tool with one parameter and no output schema, the description covers the purpose, output expectations, and comparative usage with describe_schema. It could mention the format of latency, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage for the single parameter is 100%, and the schema's own description is detailed. The tool-level description does not add additional meaning about the parameter beyond what the schema already provides. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action ('Ping a connection'), the specific SQL operation ('SELECT 1'), and the outputs ('server version + latency'). Immediately distinguishes itself from describe_schema, a sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly frames the tool as a 'fast way to confirm credentials and network path without running describe_schema', giving clear context for when to prefer this tool over a related alternative. Does not outright state when not to use, but the implication is strong enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
watch_table · B · Read-only
Monitor a table's row count and latest record. Compares to previous snapshot to show changes. Built-in scheduler. [ARCHITECT tier]
| Name | Required | Description | Default |
|---|---|---|---|
| table | Yes | Table to monitor | |
| column | No | Date column to track latest | |
| condition | No | What to watch: 'new rows', 'row count drops' | |
| connection | No | Target connection name from this tenant's inventory. Call `list_connections` to see every name + dialect, then match semantically to the user's intent (e.g. 'analytics' → a connection named `*-analytics-*`; 'prod' → a connection with `prod-` prefix). If the user didn't specify, use the tenant's default (first added). Do not invent names — resolve from `list_connections` output. |
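The snapshot comparison the description implies can be sketched as follows; the snapshot shape (`row_count`, `latest`) is an assumption, not the tool's documented output:

```python
def diff_snapshots(previous, current):
    """Compare two table snapshots the way watch_table's description
    implies: row-count delta plus latest-record movement."""
    delta = current["row_count"] - previous["row_count"]
    changes = {"row_delta": delta}
    if delta < 0:
        changes["alert"] = "row count drop"
    elif current.get("latest") != previous.get("latest"):
        changes["alert"] = "new rows"
    return changes

prev = {"row_count": 1000, "latest": "2024-05-01"}
curr = {"row_count": 1012, "latest": "2024-05-02"}
print(diff_snapshots(prev, curr))
```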
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | |
| meta | No | |
| display | No | |
| summary | No | |
| insights | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It mentions a built-in scheduler and comparison with previous snapshots, but fails to disclose whether the tool is read-only, whether it modifies data, what permissions are required, or any rate limits. Critical safety information is missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exceptionally concise at two clear sentences plus a tier label. The first sentence front-loads the primary purpose, the second adds the key differentiating feature (comparison to previous snapshot), and the tier is a helpful qualifier. No unnecessary words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no output schema, and no annotations, the description is insufficient. It does not explain the output format (e.g., does it return a snapshot object or a diff?), the behavior of the scheduler (how often does it run?), or the meaning of 'latest record'. The agent would lack key context to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The tool description does not elaborate on any parameter beyond what the schema already provides. It does not add new context or examples for parameters like 'condition' or 'connection', so it meets the baseline without exceeding it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: monitoring a table's row count and latest record, comparing to previous snapshot, and having a built-in scheduler. This distinctly differentiates it from sibling tools like analyze_table or data_profile, which focus on one-time analysis rather than ongoing monitoring with change detection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like detect_anomalies or data_profile. There are no explicit 'use this when' or 'instead of' statements, leaving the agent to infer the context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.