Airtable
Server Details
Airtable MCP Pack — wraps the Airtable REST API v0
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-airtable
- GitHub Stars: 0
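The listing names Streamable HTTP as the transport but publishes no endpoint URL, so the sketch below uses a placeholder address. It shows the shape of an MCP initialize request posted over Streamable HTTP; the headers and message fields follow the MCP specification, not anything this server confirms.

```python
# Minimal sketch: opening an MCP session over Streamable HTTP with raw
# JSON-RPC. The endpoint URL is a placeholder (the listing shows none).
import requests

MCP_URL = "https://example.com/mcp"  # placeholder, not the real endpoint

headers = {
    "Content-Type": "application/json",
    # Streamable HTTP servers may answer with plain JSON or an SSE stream.
    "Accept": "application/json, text/event-stream",
}

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

resp = requests.post(MCP_URL, json=initialize, headers=headers, timeout=30)
print(resp.status_code, resp.headers.get("content-type"))
```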
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Airtable tools are distinct and well-described, but the set includes three memory tools (remember, recall, forget) and two generic Pipeworx tools (ask_pipeworx, discover_tools) that serve a completely different purpose, creating confusion about which tools belong to the Airtable server's core functionality.
Airtable tools follow a consistent 'airtable_' prefix with verb_noun pattern, but the inclusion of ask_pipeworx, discover_tools, remember, recall, and forget breaks the naming convention entirely, mixing general-purpose tools with domain-specific ones.
10 tools is a reasonable number, but about half are unrelated to Airtable (memory and generic query tools), making the tool count feel inflated for the actual domain coverage. A pure Airtable server would have 5-7 tools.
The Airtable tools cover core operations (list bases, get schema, CRUD on records), but lack update and delete record operations. The memory and generic tools address unrelated needs, so completeness for the Airtable domain is lacking.
Available Tools
10 tools

airtable_create_record (Grade: C)
Add a new record to an Airtable table with specified field values. Returns the created record ID and full record data.
| Name | Required | Description | Default |
|---|---|---|---|
| baseId | Yes | Airtable base ID | |
| fields | Yes | Object of field name/value pairs to set on the new record | |
| _apiKey | Yes | Airtable personal access token | |
| tableIdOrName | Yes | Table ID or name | |
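For concreteness, here is a hedged sketch of what a tools/call request for this tool might look like; the base ID, table name, token, and field values are all placeholders, and the fields shown are invented.

```python
# Illustrative tools/call request body for airtable_create_record.
call_create = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "airtable_create_record",
        "arguments": {
            "_apiKey": "patXXXXXXXXXXXXXX",  # placeholder personal access token
            "baseId": "appXXXXXXXXXXXXXX",   # placeholder base ID
            "tableIdOrName": "Tasks",        # hypothetical table name
            "fields": {"Name": "Write report", "Status": "Todo"},  # invented fields
        },
    },
}
```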
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden for behavioral disclosure. It only states 'Add a new record' without mentioning side effects (e.g., appending a row), permission requirements, rate limits, or that the _apiKey must be valid and have write access.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences, concise and front-loaded. It could be slightly more informative without becoming wordy, for instance by mentioning required fields or authentication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 4 required parameters, no output schema, and a free-form nested 'fields' object, the description is insufficient. It lacks details on error conditions and on how the 'fields' object is interpreted (e.g., unknown field names or type mismatches).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already explains each parameter. The description adds no extra meaning beyond the schema, but given full coverage, a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the verb 'Add' with the resource 'record to an Airtable table', which clearly states what the tool does. However, it does not differentiate itself from siblings like airtable_get_record or airtable_list_records, missing a chance to clarify that this tool is for writing new data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus alternatives. For example, it doesn't mention that this tool requires a personal access token or that it should be used for adding new data, not updating existing records.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
airtable_get_base_schema (Grade: B)
Get the structure of an Airtable base—all tables, field names, field types, and configurations. Use first to understand available data before querying or creating records.
| Name | Required | Description | Default |
|---|---|---|---|
| baseId | Yes | Airtable base ID | |
| _apiKey | Yes | Airtable personal access token | |
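Since the pack wraps the Airtable REST API v0, this tool presumably maps onto the metadata "list tables" endpoint. A minimal sketch, assuming standard bearer-token auth; all IDs and tokens are placeholders.

```python
# Sketch of the Airtable metadata call this tool presumably wraps: the
# "list tables" endpoint returns every table with its fields and types.
import requests

base_id = "appXXXXXXXXXXXXXX"  # placeholder base ID
resp = requests.get(
    f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
    headers={"Authorization": "Bearer patXXXXXXXXXXXXXX"},  # placeholder token
    timeout=30,
)
for table in resp.json().get("tables", []):
    print(table["name"], [f["name"] for f in table["fields"]])
```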
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It only states what the tool does but does not disclose behavioral traits such as rate limits, authentication requirements (beyond the schema), or that the operation is read-only. On behavioral disclosure it adds no value beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that clearly state the purpose and when to use the tool. It is concise, but could include a brief note on the return format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema and no annotations, the description should provide more context about what the schema response looks like or any prerequisites. It is incomplete for a schema retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description does not add additional meaning beyond what the schema provides, earning a baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets the schema (tables and fields) for an Airtable base. It uses a specific verb and resource, but doesn't differentiate from siblings like airtable_list_bases, though the purpose is distinct enough.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (when you need base schema), but provides no explicit guidance on when not to use it or alternatives. It is acceptable but lacks exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
airtable_get_record (Grade: A)
Retrieve a single record by ID from an Airtable table. Returns all field values and record metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| baseId | Yes | Airtable base ID | |
| _apiKey | Yes | Airtable personal access token | |
| recordId | Yes | Record ID (e.g., recXXXXXXXXXXXX) | |
| tableIdOrName | Yes | Table ID or name | |
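The underlying REST call is presumably the single-record endpoint; a minimal sketch with placeholder IDs (record IDs carry the "rec" prefix shown in the schema example).

```python
# Sketch of the single-record fetch this tool presumably wraps.
import requests

# base / table / record, all placeholders
url = "https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Tasks/recXXXXXXXXXXXXXX"
resp = requests.get(
    url,
    headers={"Authorization": "Bearer patXXXXXXXXXXXXXX"},  # placeholder token
    timeout=30,
)
record = resp.json()
print(record["id"], record["fields"])  # field values plus record metadata
```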
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It correctly describes the operation as a read (get) but does not disclose potential side effects (none expected) or authentication requirements beyond what is in the schema. Since annotations are empty, a 3 is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is two short sentences, front-loaded with the purpose. It is concise but could include a note about the record ID format (though that is in the schema). No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple single-record retrieval with no output schema, the description is minimally complete. It covers the action, required inputs, and return contents, but lacks error-case details (e.g., a nonexistent record ID). Slightly above bare minimum.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional parameter details beyond what is in the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Retrieve a single record by ID from an Airtable table', which is a specific verb+resource combination. It distinguishes itself from siblings like airtable_list_records (which retrieves multiple records) and airtable_create_record (which creates).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (when needing a single record by ID) but does not explicitly mention when not to use or alternatives. Given siblings like airtable_list_records, the description lacks guidance on choosing between them.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
airtable_list_bases (Grade: A)
List all Airtable bases you have access to. Returns base IDs, names, and workspace info. Use to explore available databases.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | Airtable personal access token | |
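The corresponding REST route is presumably the metadata "list bases" endpoint, which also illustrates the pagination behavior the description leaves out. The token is a placeholder.

```python
# Sketch of the "list bases" call this tool presumably wraps. Responses
# are paginated; an "offset" token appears when more bases remain.
import requests

resp = requests.get(
    "https://api.airtable.com/v0/meta/bases",
    headers={"Authorization": "Bearer patXXXXXXXXXXXXXX"},  # placeholder token
    timeout=30,
)
data = resp.json()
for base in data.get("bases", []):
    print(base["id"], base["name"], base["permissionLevel"])
if "offset" in data:
    print("more pages remain; pass offset as a query parameter to continue")
```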
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must disclose behavior. It states the tool lists bases 'you have access to' but doesn't detail the read-only nature, pagination, or error conditions. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, no fluff. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple (1 param, no output schema, no siblings of the same type). Description covers the core purpose. It could mention pagination behavior or rate limits, but that is not essential for a basic list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so schema already documents the _apiKey parameter. Description adds no further meaning beyond what schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists all bases accessible to the authenticated user. The verb 'list' and resource 'bases' are specific, and 'you have access to' clarifies scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. However, as a listing tool, its purpose is self-evident and there is no sibling with similar function.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
airtable_list_records (Grade: B)
Fetch records from an Airtable table with optional filtering by formula (e.g., "{Status} = 'Done'"). Returns record IDs, field values, and metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| baseId | Yes | Airtable base ID (e.g., appXXXXXXXXXXXX) | |
| _apiKey | Yes | Airtable personal access token | |
| maxRecords | No | Maximum number of records to return (default 100) | |
| tableIdOrName | Yes | Table ID or name | |
| filterByFormula | No | Airtable formula to filter records (optional) | |
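A sketch of the list-records call this tool presumably wraps, showing how the formula from the description is passed; the requests library handles the URL encoding. IDs, the token, and the table name are placeholders.

```python
# Sketch of the underlying list-records call with a filter formula.
import requests

resp = requests.get(
    "https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Tasks",  # placeholder IDs
    headers={"Authorization": "Bearer patXXXXXXXXXXXXXX"},  # placeholder token
    params={
        "filterByFormula": "{Status} = 'Done'",  # requests URL-encodes this
        "maxRecords": 100,                       # the documented default
    },
    timeout=30,
)
for rec in resp.json().get("records", []):
    print(rec["id"], rec["fields"])
```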
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Fails to disclose behavior like the default maxRecords=100, whether records are sorted, or whether there is pagination. Does not state that the tool is read-only (though 'fetch' implies a read).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no unnecessary words, front-loaded with verb and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 5 parameters (3 required) and no output schema. Description fails to mention the default maxRecords, whether results are paginated, or how errors are surfaced, and offers only a single example for constructing formulas.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for all 5 parameters, so the baseline is 3. The description mentions optional formula filtering, which aligns with filterByFormula but adds no new semantics beyond the schema, so the score stays at the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Fetch' and the resource 'records from an Airtable table', and mentions optional filtering via formula. However, it does not explicitly distinguish itself from sibling tools like 'airtable_get_record' or 'airtable_create_record'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Does not mention prerequisites, such as needing the API key, base ID, or table ID, which are required. Also does not explain when to use filtering or limitations like pagination.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It transparently explains that the tool selects the right data source and fills arguments, indicating autonomous behavior. This goes beyond simple 'ask a question' and informs the agent of internal delegation. However, it does not disclose potential side effects, rate limits, or data source constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the key purpose. It includes examples for clarity. One minor improvement could be tighter phrasing, but overall it is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, no output schema, no annotations), the description is reasonably complete. It explains how the tool works and provides examples. However, it could be more complete by noting any limitations on question types or data source availability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single 'question' parameter, which is well-described in the schema. The description adds context by explaining the parameter should be a natural language request, but does not add significant meaning beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts a natural language question and returns an answer from the best data source. It explains that the tool internally selects tools and fills arguments, distinguishing it from siblings that require direct schema or tool knowledge. However, it doesn't explicitly name specific sibling tools or contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides examples of when to use the tool (asking questions in plain English) but does not offer explicit guidance on when not to use it or alternatives. It implies usage for any natural language query, but given siblings like 'airtable_create_record' or 'remember', it could clarify that this tool is for querying rather than creating records or storing memories.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
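To make the argument shapes concrete, here is an illustrative tools/call body for discover_tools; the query text is invented and the limit stays within the documented bounds.

```python
# Illustrative tools/call request body for discover_tools.
call_discover = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {
            "query": "create and update rows in a spreadsheet-like database",
            "limit": 10,  # default 20, max 50 per the schema
        },
    },
}
```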
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool searches by natural language and returns relevant tools with names and descriptions, which is sufficient for a search tool. However, it does not specify whether the search is purely semantic or keyword-based, or if results are ranked by relevance, leaving some minor behavioral ambiguity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: the first explains the core functionality, the second the return shape, and the third gives critical usage guidance. Every sentence is purposeful, no fluff, and the call-to-action ('Call this FIRST') stands out.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema, no nested objects) and the presence of sibling tools, the description is nearly complete. It covers purpose, usage guidance, and basic behavior. A minor gap: it doesn't explain what happens if no tools match the query, but that is a minor omission for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%: both 'query' and 'limit' have descriptions. The description adds value by explaining that 'query' is a natural language description and that the tool returns relevant tools, which goes beyond the schema's technical description. It does not add new details for 'limit' beyond what the schema says, but the high coverage makes this acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Search') and resource ('Pipeworx tool catalog'), and it explicitly distinguishes the tool's purpose: to find relevant tools by describing needs, especially when many tools are available. This differentiates it from sibling tools like airtable_list_bases or ask_pipeworx.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to call this tool FIRST when 500+ tools are available, providing a clear when-to-use directive. It also implies alternatives by focusing on discovery rather than direct record manipulation, and the sibling context shows no other search/discovery tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full burden. It fails to disclose behavioral traits such as whether the deletion is permanent, any side effects, or if the operation is idempotent. 'Delete' implies mutation, but no further context is given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single six-word sentence, front-loaded with the action and resource. Every word earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 required param, no output schema), the description is minimal. It lacks details on return behavior (e.g., success confirmation, error messages) and edge cases (e.g., deleting non-existent key).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds 'by key' which reinforces the parameter purpose, but does not add meaning beyond the schema (e.g., key format constraints, case sensitivity).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete', the resource 'a stored memory', and the method 'by key'. It effectively distinguishes from sibling tools like 'remember' (create) and 'recall' (read).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a memory needs to be removed, but does not explicitly state when to use it versus alternatives like editing a memory (if such tool existed) or conditions under which deletion fails (e.g., key not found).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
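Because the optional key parameter toggles between single-key retrieval and listing all keys, a short sketch of both argument shapes may help; the key name reuses an example key from the sibling remember tool's schema.

```python
# Single-key retrieval vs. listing all keys (tools/call argument shapes only).
recall_one = {"name": "recall", "arguments": {"key": "target_ticker"}}
recall_all = {"name": "recall", "arguments": {}}  # omit key to list all keys
```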
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that omitting key lists all memories and that it retrieves from session or previous sessions. However, it doesn't mention side effects, persistence limits, or whether retrieval is read-only (no destructive hint).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose; efficient, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 optional param, no output schema, no annotations), description adequately explains behavior. Could mention return format or memory scope more explicitly, but sufficient for a simple retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (1 param described). Description adds context that omitting key lists all memories, which goes beyond schema's description. This is helpful for understanding behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories, with a specific verb (retrieve/list) and resource (memory). It distinguishes from sibling 'remember' and 'forget' tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says to use this tool to retrieve context saved earlier, implying when to use it. However, it does not explicitly state when not to use it or mention alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
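Taken together, the three memory tools form a simple lifecycle. An illustrative sketch of the three call argument shapes, using one of the schema's example keys and an invented value:

```python
# remember -> recall -> forget, shown as tools/call argument shapes.
store = {"name": "remember", "arguments": {
    "key": "user_preference",         # example key from the schema
    "value": "prefers metric units",  # invented value
}}
fetch = {"name": "recall", "arguments": {"key": "user_preference"}}
wipe = {"name": "forget", "arguments": {"key": "user_preference"}}
```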
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior. It mentions persistence differences ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which is valuable. However, it does not clarify whether storing a key overwrites existing values, or any rate limits or size constraints, leaving some behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no redundancy. Each sentence adds value: first states the action, second advises usage, third notes persistence. Front-loaded with core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 simple params, no output schema), the description is sufficiently complete. It covers purpose, usage context, and behavioral nuance (persistence). No output schema is needed for a write-only tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters. The description adds context ('key-value pair', 'findings, addresses, preferences, notes') but does not significantly enhance meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('store a key-value pair'), the resource ('session memory'), and the context ('save intermediate findings, user preferences, or context'). It distinguishes from siblings like 'recall' (retrieval) and 'forget' (deletion) by focusing on storage.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this to save intermediate findings, user preferences, or context across tool calls', providing clear guidance on when to use. However, it does not explicitly state when not to use or mention alternatives like 'forget' or 'recall', but the sibling names imply their roles.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!