Census
Server Details
Census MCP — U.S. Census Bureau housing-relevant APIs.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-census
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 10 of 10 tools scored. Lowest: 3.3/5.
Most tools are clearly distinct (census_acs, census_building_permits, census_homeownership, census_housing_starts) but ask_pipeworx overlaps with many of them by acting as a catch-all query interface. The memory tools (forget, recall, remember) are distinct from census tools but feel out of place. Overall, ambiguity is moderate.
Naming is inconsistent: census tools use snake_case with a 'census_' prefix (e.g., census_acs, census_building_permits), but memory tools use bare imperative verbs (forget, recall, remember), and ask_pipeworx uses yet another verb style. discover_tools also breaks the pattern. Mixing styles reduces predictability.
10 tools is a reasonable count overall, but the inclusion of memory tools and discover_tools suggests this server might be a general-purpose assistant rather than focused solely on Census data. The core Census domain has 5 tools, which feels slightly thin, while extra tools add breadth but dilute focus.
For Census data, the tools cover ACS, building permits, homeownership, and housing starts, but lack other important datasets like population estimates, economic indicators, or decennial census data. The memory tools add utility but are orthogonal. The ask_pipeworx tool compensates somewhat by promising to fetch any data, but its actual capabilities are vague.
Available Tools
10 tools
ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
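As a concrete sketch, the arguments object for a call might look like this (a hypothetical payload; the question is taken from the examples above):

```json
{
  "question": "What is the US trade deficit with China?"
}
```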
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool auto-selects sources and fills arguments, but lacks detail on the underlying data sources or their limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise and front-loaded with purpose, followed by examples. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 param, no nested objects, no output schema), description is adequate but could mention scope of data sources or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds natural language usage context but does not exceed what the schema provides for the single parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts plain English questions and returns answers by selecting the best data source, with concrete examples distinguishing it from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage via 'just describe what you need' but does not specify when not to use it or alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
census_acs (Grade: A)
Search American Community Survey data by geography and variable code (e.g., B25077_001E for median home value). Returns housing ownership rates, rental costs, vacancy rates, and home values.
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | Survey year (default 2022). ACS 5-year estimates available from 2009 onward. | |
| _apiKey | Yes | Census API key | |
| geography | Yes | Geographic level and filter using Census "for" syntax (e.g., "state:06" for California, "county:*" for all counties, "state:*" for all states, "county:037&in=state:06" for LA County). | |
| variables | Yes | Comma-separated variable codes to retrieve (e.g., "NAME,B25001_001E,B25077_001E"). Always include NAME for place names. | |
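Assembling the documented parameters, a hypothetical call for California housing figures might pass arguments like these (the API key is a placeholder; year is omitted to use the 2022 default):

```json
{
  "_apiKey": "YOUR_CENSUS_API_KEY",
  "geography": "state:06",
  "variables": "NAME,B25001_001E,B25077_001E"
}
```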
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It explains the data source and common variables but does not disclose behavioral traits like rate limits, authentication requirements beyond the API key, or whether the tool is read-only. It adds context beyond schema but lacks detail on API behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with a clear purpose and helpful examples. It is front-loaded with the main function and then provides variable codes. Could be slightly more structured with explicit parameter details, but overall efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no output schema, and no annotations, the description covers the core purpose and key variable codes. It lacks information on return format or pagination, but for a simple data retrieval tool, it is largely complete. Sibling differentiation is implicit.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context about variable codes and geography examples, which is helpful but doesn't go beyond what a well-documented schema provides. No new meaning for parameters like _apiKey beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets ACS 5-year data from the U.S. Census Bureau, specifies it's for housing statistics, and lists common variable codes. It differentiates from sibling tools like census_building_permits or census_homeownership by focusing on core housing data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for housing-related ACS data but does not explicitly say when to use this over other census tools. It provides helpful variable codes but lacks direct comparison to sibling tools for exclusion criteria.
census_available_datasets (Grade: A)
Discover Census datasets and their variables. Returns dataset names, descriptions, and variable codes (e.g., B25001_001E) for querying with other census tools.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | No | Census API key (not required for this endpoint but accepted for consistency) | |
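Because the only parameter is optional, a minimal invocation can pass an empty arguments object:

```json
{}
```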
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It clarifies that no API key is needed (a key behavioral trait), and that the endpoint is for discovery, not data retrieval. It doesn't disclose response format or limits, but given the simplicity, this is adequate.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no wasted words. Each sentence serves a distinct purpose: action, requirement, and utility. Information is front-loaded with the verb.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero required parameters, no output schema, and a simple list operation, the description covers the essential purpose and behavioral aspects. It could mention that the output is a list of dataset objects, but this is not critical.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add any parameter-specific meaning beyond what the schema provides. The _apiKey parameter is already well-documented in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('List') and a clear resource ('Census Bureau datasets'). It explains the tool's utility for discovering dataset identifiers, descriptions, and variables, distinguishing it from data-querying siblings like census_acs.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'No API key required' and positions the tool as a prerequisite step before querying specific data. However, it does not mention when not to use it or list alternative tools explicitly.
census_building_permits (Grade: B)
Check monthly building permits for new residential construction by geography. Returns count of authorized privately-owned housing units and construction activity trends.
| Name | Required | Description | Default |
|---|---|---|---|
| time | Yes | Time period (e.g., "2024-01" for January 2024, "from+2023-01+to+2024-01" for a range). | |
| _apiKey | Yes | Census API key | |
| variables | Yes | Comma-separated variables (e.g., "PERMIT" for total permits, "PERMIT_1UNIT" for single-family). Use census_available_datasets to discover variables. | |
| category_code | No | Category filter (e.g., "TOTAL" for total, "1UNIT" for single-family). Optional. | |
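A hypothetical arguments object, using the example values from the table above (the API key is a placeholder, and category_code is optional):

```json
{
  "time": "from+2023-01+to+2024-01",
  "_apiKey": "YOUR_CENSUS_API_KEY",
  "variables": "PERMIT",
  "category_code": "TOTAL"
}
```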
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must cover behavioral traits. It correctly indicates it is a data retrieval tool (not destructive), but does not disclose any rate limits, data freshness, or potential errors. However, it does clarify it tracks 'new privately-owned housing units authorized by building permits,' which sets expectations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that clearly state purpose and source. No redundant or filler content. Front-loaded with key information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 4 parameters (100% schema coverage) and no output schema, the description adequately states what data is returned (monthly building permits data) but does not mention response format or limitations. It is sufficient for basic understanding but could be more complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so each parameter is already described in the schema. The description adds no extra meaning beyond what the schema provides. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it gets monthly building permits data from the Census Bureau residential construction survey. The description specifies the resource (Census Bureau building permits) and the scope (tracks new privately-owned housing units authorized by building permits), distinguishing it from siblings like census_acs or census_homeownership.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like census_housing_starts or census_available_datasets. Does not mention prerequisites or when not to use it.
census_homeownership (Grade: A)
Check quarterly homeownership rates by geography. Returns percentage of owner-occupied housing units from the Housing Vacancy Survey.
| Name | Required | Description | Default |
|---|---|---|---|
| time | Yes | Time period in YYYY-QN format (e.g., "2024-Q1" for Q1 2024). Use "from+2020-Q1+to+2024-Q1" for a range. | |
| _apiKey | Yes | Census API key | |
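For example, a hypothetical request for a multi-year range of quarterly rates (the API key is a placeholder):

```json
{
  "time": "from+2020-Q1+to+2024-Q1",
  "_apiKey": "YOUR_CENSUS_API_KEY"
}
```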
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must carry behavioral transparency. It does not disclose any side effects, authorization requirements, or return format details. However, the tool appears to be a read-only data query, which is implied but not stated.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the purpose and metric. No redundant information; every word adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has only 2 parameters with 100% schema coverage and no output schema or nested objects. The description is minimally complete: it explains what the tool does and the metric, but lacks details like response format or error conditions, which would be helpful.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description does not need to add param details. It adds no extra meaning beyond the schema, which is sufficient for a baseline score of 3.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns quarterly homeownership rates from a specific survey, with the exact metric defined as the percentage of owner-occupied units. This is a specific verb+resource combination that distinguishes it from sibling tools like census_acs or census_building_permits.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for accessing homeownership data from HVS, but does not explicitly state when to use it over other census tools or provide alternatives. No exclusions or context about data availability are mentioned.
census_housing_starts (Grade: B)
Get residential construction pipeline by geography: new starts, units under construction, and completed units. Returns housing supply and activity trends.
| Name | Required | Description | Default |
|---|---|---|---|
| time | Yes | Time period (e.g., "2024-01" for January 2024). | |
| region | No | Census region filter (e.g., "NE" for Northeast, "MW" for Midwest, "S" for South, "W" for West). Optional. | |
| _apiKey | Yes | Census API key | |
| variables | Yes | Comma-separated variables (e.g., "STARTS" for housing starts, "UNDER_CONSTRUCTION", "COMPLETIONS"). | |
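A hypothetical arguments object covering all three pipeline stages for the South region (the API key is a placeholder):

```json
{
  "time": "2024-01",
  "region": "S",
  "_apiKey": "YOUR_CENSUS_API_KEY",
  "variables": "STARTS,UNDER_CONSTRUCTION,COMPLETIONS"
}
```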
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must cover behavioral traits. It mentions it gets data from the Census Bureau but does not disclose any API rate limits, required authentication beyond the API key, or potential response size. The description is neutral and not misleading.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded with the action. It includes key data types but could be slightly more structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 4 parameters, no output schema, and no annotations, the description is adequate but could mention output format or pagination. It is minimally viable.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description does not add new meaning beyond listing the data types. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it gets residential construction data from the Census Bureau, specifying three data types: housing starts, units under construction, and completions. It distinguishes itself from sibling tools like census_building_permits and census_homeownership by focusing on housing starts and construction activity.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving housing construction data but does not explicitly state when to use this tool versus alternatives like census_building_permits. It provides no guidance on prerequisites or data frequency.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
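For instance, a hypothetical search capped at ten results (assuming limit is numeric, per the default and max values above):

```json
{
  "query": "analyze housing market trends",
  "limit": 10
}
```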
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must cover behavioral traits. It discloses that it searches a catalog and returns relevant tools with names and descriptions, which is a read-only, non-destructive operation. The description implies it is a search tool, but does not mention any side effects or access restrictions. The lack of annotations raises the burden, but the description adequately conveys the search behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each earning its place: the first states the core action, the second the output, and the third strategic usage advice. It is front-loaded, with no fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema, no nested objects) and the context of many sibling tools, the description is complete. It explains what it does, when to use it, and what it returns. No missing information.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining that the query is a 'Natural language description of what you want to do' with examples, and for limit it provides default and max values. This enhances understanding beyond the schema alone.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog', and specifies the purpose 'by describing what you need. Returns the most relevant tools with names and descriptions.' This distinguishes it from sibling tools, which are specific data tools, as this is a meta-tool for discovering other tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on when to use it (as a first step) and the context (many tools available), and implies it is an alternative to manually searching through sibling tools.
forget (Grade: B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
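A hypothetical call, reusing a key name from the remember tool's examples:

```json
{
  "key": "subject_property"
}
```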
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It states deletion but does not mention whether deletion is permanent, requires confirmation, or affects related memories. This is insufficient for a destructive operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, efficient and front-loaded. However, it could benefit from a brief behavioral note (e.g., permanence) without becoming verbose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is too brief. For a single-parameter deletion tool, it should clarify irreversibility, scope (e.g., current user only), and any confirmation needed. The current description leaves ambiguity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add meaning beyond the schema's 'key' parameter description. The parameter description in the schema is adequate, so baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a stored memory by key' uses a specific verb ('Delete') and resource ('stored memory by key'), clearly distinguishing it from siblings like 'remember' (store) and 'recall' (retrieve).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool vs alternatives, but its name and purpose imply deletion. The sibling tools 'recall' and 'remember' suggest complementary operations, but no guidance is provided on prerequisites or scenarios.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
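For example, fetching a single saved value (omit the key entirely to list all stored memories); the key name is hypothetical:

```json
{
  "key": "subject_property"
}
```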
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description must disclose behavior. It states retrieval and listing, but doesn't mention any side effects or limitations (e.g., memory persistence, deletion behavior).
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, clear and front-loaded. First sentence states purpose and usage, second sentence provides usage context.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple tool with one optional parameter and no output schema, description covers key usage. Could add info about return format or memory lifecycle, but adequate for current complexity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers parameter well (100% coverage). Description adds context that omitting key lists all, which schema implies but description makes explicit.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'retrieve' and resource 'memory', specifies two modes (by key or list all) and distinguishes from 'remember' and 'forget' siblings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (retrieve context saved earlier) and how to list all (omit key). Could mention when not to use (e.g., for new information, use remember).
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
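A hypothetical call with placeholder values, using a key name from the schema examples:

```json
{
  "key": "subject_property",
  "value": "123 Main St, Springfield; 3bd/2ba, listed at $450,000"
}
```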
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that authenticated users get persistent memory and anonymous sessions last 24 hours, which is important behavioral context. It does not mention destructive effects or rate limits, but those are unlikely to matter for a memory store.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each adding essential information: purpose, typical use cases, and persistence behavior. No wasted words; information is front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple input schema, the description is complete enough. It explains what the tool does, when to use it, and a key behavioral detail (persistence). It could mention return value or error cases, but not strictly necessary for this simple tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds minimal parameter information (key and value) but does not elaborate beyond what the schema provides. Examples are given in schema descriptions, not in the tool description.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, with a specific verb ('store') and resource ('session memory'). It distinguishes from siblings like 'recall' (retrieve) and 'forget' (delete) by its purpose of saving data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to use this tool for saving intermediate findings, user preferences, or context across tool calls. It provides context but does not explicitly state when not to use it or mention alternatives, though siblings are implied.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.