BLS
Server Details
BLS MCP — Bureau of Labor Statistics public data API (v2)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-bls
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 9 of 9 tools scored. Lowest: 3.3/5.
Most tools have clearly distinct purposes (BLS data retrieval, search, memory, and meta-tool discovery). However, 'ask_pipeworx' overlaps with the BLS-specific tools since it can answer BLS queries, and 'discover_tools' is a meta-tool that could cause confusion about when to use it versus 'ask_pipeworx'.
BLS tools follow a 'bls_verb' pattern (bls_get_series, bls_latest, etc.), but memory tools use single verbs (forget, recall, remember) and the meta-tools (ask_pipeworx, discover_tools) break the pattern. This mix of conventions is readable but inconsistent.
9 tools is reasonable for a BLS-focused server with added memory and meta-capabilities. Not too many or too few, though the inclusion of general memory tools slightly expands the scope.
The BLS surface covers core operations: search, get series, get latest, list popular. However, there is no update or delete capability for user data (only the memory tools offer forget), and there is no tool for bulk downloads or advanced filtering, which are common BLS needs.
Available Tools
9 tools
ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
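As a rough sketch, an agent invokes this tool with a standard MCP tools/call request over the Streamable HTTP transport; the request id and question text below are illustrative, and the response shape is not documented above.
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the current US unemployment rate?"
    }
  }
}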
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behaviors: uses 'best available data source', automatically picks tool and fills arguments, returns result. Clearly states it's a natural language interface that abstracts away tool selection. No contradictions with annotations (none provided).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: first states core function, second explains automation, third provides examples. No wasted words. Front-loaded with key action. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple input schema (one param), no output schema, and no annotations, the description sufficiently covers how to use the tool. Examples provide clarity. Could mention potential limitations (e.g., scope of data sources, latency) but is complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (only one parameter 'question' with clear description). The description adds meaning beyond the schema by explaining how the question is used (to select tool and fill arguments) and providing usage examples. This adds valuable context that the schema alone does not convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: answer natural language questions by selecting the best data source. It uses specific verbs ('Ask', 'picks', 'fills', 'returns') and resource ('answer from the best available data source'). Distinguishes from siblings by emphasizing natural language interaction and automatic tool selection, which none of the sibling tools (e.g., bls_get_series, bls_search) offer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'no need to browse tools or learn schemas — just describe what you need.' Provides concrete examples (trade deficit, adverse events, 10-K filing). Does not explicitly state when not to use this tool or mention alternatives, but the examples and context imply it's for general-purpose questions that can be answered by any of the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bls_get_series (A)
Fetch historical time series data for employment, inflation, wages, productivity, or housing. Returns dated data points with values. Provide series ID (e.g., "PAYEMS" for total nonfarm employment).
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | No | BLS registration key (optional, increases rate limits) | |
| end_year | No | End year (e.g., "2024") | Current year |
| series_id | Yes | BLS series ID (e.g., "LNS14000000" for unemployment rate). For multiple series, comma-separate them (e.g., "LNS14000000,CES0000000001"). | |
| start_year | No | Start year (e.g., "2023") | Current year minus 2 |
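A minimal sketch of a request that fetches the two example series IDs from the schema; the year range and request id are illustrative.
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "bls_get_series",
    "arguments": {
      "series_id": "LNS14000000,CES0000000001",
      "start_year": "2022",
      "end_year": "2024"
    }
  }
}
Omitting start_year and end_year falls back to the documented defaults (current year minus 2 through the current year); _apiKey is optional and only raises the upstream BLS rate limits.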
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. The description does not mention rate limits, data freshness, whether the tool is read-only, or any side effects. It lacks critical behavioral context for a data retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences, front-loading the core purpose and supported categories, then the return value and an example series ID. Every sentence adds value, and there is no waste. It is concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description should explain what the response looks like or note pagination/format. It says only that it 'returns dated data points with values'. For a simple data retrieval tool with well-known BLS series, the description is minimally complete but has gaps in behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context about supported series types (employment, inflation, wages, etc.) but does not add meaning beyond what the schema provides for each parameter. The description's value is limited to summarizing the tool's domain.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs and resources: 'Fetch historical time series data for employment, inflation, wages, productivity, or housing.' It clearly states that the tool retrieves time series data for multiple economic indicators, and the sibling context includes related BLS tools (e.g., bls_latest, bls_search) from which it is differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives high-level context on when to use it (employment, inflation, wages, etc.) but does not explicitly state when not to use it or compare to siblings like bls_latest or bls_search. The agent can infer usage from 'historical time series data' versus 'latest' or 'search', but no direct guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bls_latest (A)
Get the most recent data point for a BLS series. Returns latest value and date. Use when you need current figures without historical context.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | No | BLS registration key (optional) | |
| series_id | Yes | BLS series ID (e.g., "LNS14000000") | |
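An illustrative request, reusing the unemployment-rate series ID from the schema example; the request id is arbitrary.
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "bls_latest",
    "arguments": {
      "series_id": "LNS14000000"
    }
  }
}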
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must carry the burden. Clearly describes read-only behavior ('Get the most recent data point for a BLS series') and mentions no destructive effects. Could add details like API rate limits or data freshness, but the current description is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with action and purpose. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with 2 params (one optional), no output schema. Description covers purpose and usage context. Lacks mention of return format or example, but tool is simple enough that this is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds no extra meaning beyond schema descriptions; series_id example is already in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (Get) + resource (most recent data point for a BLS series). Distinguishes from siblings like bls_get_series (multiple data points) and bls_popular_series (popular series list).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'use when you need current figures without historical context', implying quick one-off lookups. No explicit when-not-to-use guidance or alternatives, but sibling names provide differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bls_popular_series (A)
Browse popular BLS series by category: employment, inflation, wages, housing, productivity. Returns series IDs and descriptions. Start here to explore available data.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Filter by category: housing, employment, prices, wages, productivity | All categories (if omitted) |
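An illustrative request filtered to a single category; per the schema, omitting the category argument returns all categories.
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "bls_popular_series",
    "arguments": {
      "category": "housing"
    }
  }
}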
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states the output includes IDs and descriptions organized by category. Annotations are empty, so the description carries the full burden. It does not mention any destructive behavior, rate limits, or other behavioral traits, but since the tool is clearly read-only (listing), a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences with no waste. Front-loaded with action and result. Every sentence provides value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not detail the return structure (e.g., format of IDs and descriptions). However, for a simple listing tool with one optional parameter and a clear purpose, it is nearly complete. Could mention that the list is static or curated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the parameter's description is present). The description adds that categories include housing, employment, prices, wages, productivity, but this is already in the schema description. No additional meaning beyond schema is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists popular BLS series with IDs and descriptions, organized by category. It uses specific verbs ('Browse', 'explore') and distinguishes itself from siblings like 'bls_search', which searches by keyword.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Start here to explore available data', which implies a discovery use case but does not explicitly contrast with sibling tools like 'bls_search' or 'bls_get_series'. No exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bls_search (A)
Search BLS economic data series by keyword. Returns matching series IDs and titles. Use bls_get_series with an ID to fetch historical data points.
| Name | Required | Description | Default |
|---|---|---|---|
| keyword | Yes | Keyword to search for (e.g., "rent", "construction", "unemployment", "CPI", "housing") | |
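A sketch of the two-step flow the description implies: search by keyword, then fetch data for a returned ID. The keyword is illustrative, and the follow-up assumes the search surfaced "LNS14000000"; actual results may differ.
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "bls_search",
    "arguments": {
      "keyword": "unemployment"
    }
  }
}
The follow-up call then fetches the historical data points:
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "bls_get_series",
    "arguments": {
      "series_id": "LNS14000000"
    }
  }
}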
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It states that it searches BLS series by keyword and returns matching IDs and titles, which implies a read operation that is not destructive. However, it does not disclose whether the catalog is limited in size, whether results are paginated, or whether there are rate limits. The description is adequate but not rich in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences, concise and front-loaded with the core action. Each provides essential information: what the tool does, what it returns, and how to follow up with bls_get_series. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is simple (one required parameter, no output schema, no annotations), the description is reasonably complete. It explains the return value (matching series IDs and titles) and points to bls_get_series for fetching data. However, it could clarify whether the search covers the full BLS catalog or only a curated subset, to set expectations about coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds a sentence about the return value but does not add meaning beyond the schema for the single 'keyword' parameter. The schema already provides example keywords, so the description does not significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches BLS economic data series by keyword and returns matching series IDs and titles. It distinguishes itself from sibling tools like bls_get_series (which retrieves data for a known ID) and bls_popular_series (which lists popular series without a keyword search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Search BLS economic data series by keyword' and points to bls_get_series for fetching historical data once an ID is known, which implies when to use it (finding series) versus when to hand off (retrieving data). However, it does not mention bls_popular_series or state any explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
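An illustrative request that reuses one of the schema's own example queries; the limit value is arbitrary within the documented maximum of 50.
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "analyze housing market trends",
      "limit": 10
    }
  }
}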
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose all behavioral traits. It states that it returns 'most relevant tools with names and descriptions', which is accurate. However, it does not mention idempotency, auth requirements, or rate limits. Since it is a search tool (non-destructive), the lack of side-effect disclosure is acceptable, but a 3 is appropriate given the absence of annotations and no mention of potential errors or performance.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the core purpose. Every sentence adds value: the first defines the action, the second states the return value, and the third provides usage context. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 parameters, no output schema, no nested objects), the description is complete. It explains what the tool does, when to use it, and the input format. No return value explanation is needed since the tool's output is implied (list of tools with names and descriptions).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning the schema already describes both parameters (query and limit). The description repeats the purpose of the query parameter ('describing what you need') but does not add new semantics beyond the schema. The limit parameter is not mentioned in the description. With full schema coverage, baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching a catalog of tools by natural language description. It specifies the action ('search'), the resource ('Pipeworx tool catalog'), and the input format ('describing what you need'). This distinguishes it from siblings that focus on specific data sources (BLS) or memory operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to 'Call this FIRST when you have 500+ tools available and need to find the right ones.' This provides clear when-to-use guidance and implies it should be used before other tools. No exclusions or alternatives are needed as it is the entry point.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
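An illustrative request; the key is borrowed from the example keys in the remember schema and is not a real stored memory.
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "subject_property"
    }
  }
}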
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a destructive action (Delete) but does not elaborate on what happens after deletion, whether it's reversible, or if there are confirmation steps. With no annotations, the description carries the full burden; it provides minimal behavioral context beyond the obvious.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no unnecessary words. It front-loads the action and specifies the mechanism.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema, no nested objects), the description is minimally complete. However, it could mention whether the key must exist or what error occurs if it does not. It is adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'by key', which aligns with the required parameter 'key'. The schema already has a description for 'key' ('Memory key to delete'), and the description adds no further semantic detail. With 100% schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Delete) and the resource (a stored memory) and the means (by key). It effectively differentiates from siblings like 'recall' and 'remember' which imply retrieval and storage respectively.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, side effects, or when not to use it. No mention of alternatives like 'recall' for retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
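An illustrative request; the key is hypothetical. Sending empty arguments (no key) lists all stored keys instead, per the schema.
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {
      "key": "subject_property"
    }
  }
}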
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral transparency. It explains the dual behavior (retrieve by key or list all), which is accurate and helpful. However, it does not mention any side effects, performance implications, or access restrictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, consisting of two sentences that convey all necessary information without redundancy. It is front-loaded with the main purpose and provides usage nuance efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 optional parameter, no output schema, no annotations), the description sufficiently covers the tool's functionality. It could be enhanced by specifying the format of returned memories or whether list returns keys or full memories, but it is largely complete for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes the parameter 'key' with 100% coverage. The description adds meaning by explaining that omitting the key lists all memories, which goes beyond the schema's description of 'Memory key to retrieve (omit to list all keys)' by contextualizing the action.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'retrieve' and the resource 'memory', distinguishing between retrieving by key and listing all memories. It also clarifies the tool's purpose of accessing saved context, differentiating it from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('to retrieve context you saved earlier') but does not explicitly state when not to use it or compare with alternatives like 'remember' or 'forget'. However, the purpose is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
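An illustrative request; the key is taken from the schema examples and the value is made up.
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_ticker",
      "value": "AAPL"
    }
  }
}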
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. Clearly discloses behavioral traits: session memory (not permanent storage), persistence conditions (authenticated users vs 24-hour anonymous), and no destructive side effects. Good transparency without annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each purposeful: defines action, lists use cases, specifies persistence behavior. Efficiently structured with no waste. Could be slightly more concise by combining sentences, but still clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (2 required params, no output schema, no nested objects), description is complete enough. Covers purpose, use cases, and behavioral nuances. No output schema needed since return is implicit acknowledgment.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and schema descriptions already define parameters well. Description reinforces usage context and provides example keys, adding semantic meaning beyond schema. No further parameter details needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool stores a key-value pair in session memory, with specific verb ('store') and resource ('key-value pair in session memory'). Distinguishes from siblings like 'recall' and 'forget' by describing storage purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states use cases: saving intermediate findings, user preferences, or context across tool calls. Provides context about persistence differences for authenticated vs anonymous users, but doesn't explicitly state when not to use or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.