Server Details
FDIC MCP — FDIC BankFind Suite API (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-fdic
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 10 of 10 tools scored. Lowest: 3.4/5.
Most tools have distinct purposes (bank failures, financials, search, etc.), but ask_pipeworx overlaps with the others by acting as a catch-all query tool, which can leave an agent unsure which one to use.
The FDIC tools use a consistent 'fdic_' prefix and snake_case, but the memory tools (forget, recall, remember) and ask_pipeworx/discover_tools use a different naming style (imperative verbs without a prefix), creating inconsistency.
Ten tools is reasonable for the scope, though ask_pipeworx and discover_tools are meta-tools that could be omitted or folded into the others, making the set slightly larger than needed.
The set covers core FDIC data (institutions, failures, financials) but offers no update or delete operations (the API is likely read-only), and the memory tools seem out of place in an FDIC-focused server, leaving gaps in typical CRUD coverage.
Available Tools
10 tools
ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
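For illustration, here is roughly what an MCP client could send to invoke this tool, assuming the standard JSON-RPC 2.0 tools/call envelope; the question is the first example from the description above, and the request id is arbitrary.
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
The examples for the remaining tools below show only the arguments object, since the surrounding envelope is the same.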
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It explains that the tool 'picks the right tool, fills the arguments, and returns the result,' indicating autonomous decision-making. However, it does not disclose limitations like potential latency, error handling, or that it may rely on other tools with their own restrictions. The description is clear but lacks depth on failure modes or authorization needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise (three sentences) and front-loaded with the core purpose. Every sentence adds value: first sentence states what it does, second explains how it works, third gives examples. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no nested objects), the description is reasonably complete. It explains the high-level behavior and usage. However, it could mention that results may come from various underlying tools and that response format may vary, but this is a minor gap. The examples help contextualize usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'question' described as 'Your question or request in natural language.' The description adds context by stating it accepts 'plain English' and provides examples, but the schema already captures the parameter's purpose adequately. The description does not add much beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it takes a natural language question and returns an answer by selecting the appropriate underlying tool and filling arguments. It distinguishes itself from siblings by acting as a universal interface, contrasting with specific tools like fdic_failures or discover_tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: when you want to 'just describe what you need' without browsing tools or learning schemas. It provides examples, but does not explicitly state when not to use it or mention alternatives, though the context of sibling tools implies those are for specific structured queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
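A sketch of the arguments an agent might pass, reusing one of the example queries from the schema; limit is optional and defaults to 20.
{
  "query": "find trade data between countries",
  "limit": 20
}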
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description adds value by stating it returns the most relevant tools with names and descriptions, implying a semantic search. Could mention that it does not execute tools, but context is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, each adding value: purpose, usage guidance, and explicit first-call instruction. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (a search with a query and an optional limit), the description covers all necessary context. No output schema is needed, as the tool returns a list of tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so description need not add much. However, description could hint at the nature of 'query' parameter (e.g., semantic search). Already implied by examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches a catalog by describing needs, returns relevant tools with names and descriptions. Distinguishes from siblings as a meta-search tool for tool discovery, with no overlap with other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises to call this FIRST when 500+ tools are available and need to find the right ones. Provides clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fdic_failures (Grade: A)
Search FDIC bank failures by date range. Returns bank name, location, CERT ID, failure date, acquiring institution, and fund type.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of failure records to return (default 20) | |
| end_date | No | End date filter in MM/DD/YYYY format (e.g., "12/31/2023") | |
| start_date | No | Start date filter in MM/DD/YYYY format (e.g., "01/01/2023") | |
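Illustrative arguments for a 2023 date range, using the MM/DD/YYYY examples from the schema; all three parameters are optional.
{
  "start_date": "01/01/2023",
  "end_date": "12/31/2023",
  "limit": 20
}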
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It discloses that it returns specific fields and allows date range filtering. However, it does not mention if it supports pagination, rate limits, or any ordering beyond 'sorted by most recent.' Since annotations provide no safety hints, a score of 3 is appropriate as the description is adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, concise and front-loaded with the main purpose. Every word earns its place; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple list with three optional parameters and no output schema, the description covers the core functionality and return fields. It could mention default limit or sorting direction (descending by date), but it is largely complete. A 4 reflects minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes all three parameters. The description adds context that the parameters are for date filtering but does not add new meaning beyond what the schema provides. Baseline 3 is correct.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists FDIC bank failures sorted by most recent, and it distinguishes itself from sibling tools like fdic_search_institutions (which likely searches) and fdic_summary (which likely summarizes). It also explicitly mentions optional date filtering and the fields returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies when to use this tool (list failures, filter by date) and implies when not to (e.g., for general institution search or financials). It does not explicitly name alternatives but the sibling context provides that. It could be improved by explicitly saying 'Use this for recent failure records; for broader institution data use fdic_search_institutions.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fdic_financials (Grade: A)
Get quarterly financial metrics for a bank by CERT ID. Returns assets, deposits, net income, interest income, loan losses, ROA, ROE, efficiency ratio.
| Name | Required | Description | Default |
|---|---|---|---|
| cert | Yes | FDIC certificate number | |
| limit | No | Number of quarterly reports to return (default 8, which is 2 years) | |
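Illustrative arguments; the CERT value is borrowed from the fdic_get_institution example ("628" for Chase), and limit defaults to 8 quarters (2 years).
{
  "cert": "628",
  "limit": 8
}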
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries the full burden. It discloses that data is quarterly, returns specific metrics, and hints at default pagination (limit=8). However, it does not mention read-only nature, rate limits, or whether historical data is complete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. Purpose and key output details are front-loaded. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists the return fields well. For a simple query tool, this is nearly complete. Minor gap: no mention of error handling if the cert is invalid.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters. The description adds context by explaining the default limit (8 = 2 years) and listing metrics, but this is not essential beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get'), resource ('financial call report data for a bank'), and identifier ('by CERT number'). It also lists the specific financial metrics returned, distinguishing it from siblings like fdic_failures or fdic_search_institutions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (querying bank financials by cert number) but does not explicitly state when to use this tool versus alternatives. No guidance on when not to use or what prerequisites exist (e.g., requiring a valid cert from fdic_search_institutions).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fdic_get_institution (Grade: A)
Get detailed profile for an FDIC-insured bank (e.g., CERT "5136"). Returns name, location, assets, deposits, and regulatory details.
| Name | Required | Description | Default |
|---|---|---|---|
| cert | Yes | FDIC certificate number (e.g., "628" for Chase) | |
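Illustrative arguments, using the CERT example from the description above.
{
  "cert": "5136"
}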
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses that the tool returns a full institution profile with fields like name, location, assets, and regulatory details. However, it does not mention whether the tool is read-only (presumably safe), any rate limits, or what happens if the cert number is invalid. A 3 is adequate given no contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, clearly structured: first sentence states the primary action (get detailed info by cert), second sentence lists what the return includes. Every word earns its place; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 required param, no nested objects, no output schema), the description is complete enough: it covers the purpose, identifier, and return fields. It could optionally mention error handling or data recency, but that is not critical. The sibling tool list helps provide context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for the single parameter 'cert', already explaining it's the FDIC certificate number with an example. The description adds the context that this is a unique identifier for a bank, but does not add significant new meaning beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to get detailed information for a specific FDIC-insured bank by its CERT number. It specifies the resource (bank) and the identifier (CERT number), and distinguishes itself from sibling tools like fdic_search_institutions (search vs. get) and fdic_failures (different focus).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use: when you have a specific CERT number and want full institution details. It does not explicitly state when not to use it or name alternatives, but the context of sibling tools (e.g., fdic_search_institutions for searching) provides some differentiation. A clear exclusion statement would improve it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fdic_search_institutions (Grade: B)
Search FDIC-insured banks by name. Returns institution name, CERT ID, location, total assets, deposits, net income, ROA, ROE, and report date.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (default 10) | |
| search | Yes | Bank or institution name to search for (e.g., "Chase", "Wells Fargo") | |
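Illustrative arguments, using one of the schema's example names; limit is optional and defaults to 10.
{
  "search": "Wells Fargo",
  "limit": 10
}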
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must stand alone. It discloses it's a read-only search and lists return fields, but doesn't mention pagination, performance, or any side effects. Adequate for a simple search.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, concise and front-loaded with purpose. Every sentence adds value. Could be slightly more structured but effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, description lists return fields. Lacks details on ordering, result count, or behavior when no results. Sufficient for simple search but leaves gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters documented in schema). The description adds context that search is by name and lists returned fields, but doesn't add meaning beyond the schema for parameters themselves.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it searches for FDIC-insured banks by name and lists return fields. It distinguishes from siblings like fdic_failures and fdic_financials by focusing on institutions, but could be clearer that it returns summary data, not full details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use vs. alternatives like fdic_get_institution (which likely returns a single institution's full details). The description implies search use but doesn't specify when not to use or mention siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fdic_summary (Grade: A)
Get industry-wide totals for all FDIC-insured banks on a reporting date. Returns total assets, deposits, net income, interest income, loan count, institution count.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Report date in YYYYMMDD format (e.g., "20240331" for Q1 2024) | |
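Illustrative arguments, using the YYYYMMDD example from the schema (Q1 2024).
{
  "date": "20240331"
}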
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosing behavior. It clearly states the tool returns aggregate data for a given date, listing the included metrics (total assets, deposits, net income, etc.). This is sufficient for an agent to understand the tool's output and safety (read-only query). No contradictions with annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of moderate length, immediately front-loading the key verb and resource. Every phrase adds value: 'aggregate industry summary data', 'all FDIC-insured institutions', 'given reporting date', and the list of returned metrics. No fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, no output schema, no nested objects) and the presence of sibling tools that cover specific aspects, the description is complete. It defines what data is returned, the required input, and implicitly that no other parameters are needed. An agent can confidently invoke this tool based solely on the description and schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single required 'date' parameter, including its format and an example. The description reinforces the date's purpose ('for a given reporting date') and adds marginal value by tying it to the aggregate context. The combination is clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves aggregate industry summary data for FDIC-insured institutions. It specifies the action ('get'), the resource ('aggregate industry summary data'), and the scope ('for a given reporting date'). The sibling tools like 'fdic_failures', 'fdic_financials', 'fdic_get_institution', and 'fdic_search_institutions' focus on specific subsets, making 'fdic_summary' distinct as the summary-level aggregator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining a high-level snapshot of the banking industry, which differentiates it from sibling tools that drill into specific institutions or failures. However, it does not explicitly state when to use this tool versus alternatives, nor does it provide guidance on prerequisites or frequency of data updates.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
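Illustrative arguments; the key shown is one of the hypothetical example keys from the remember schema.
{
  "key": "subject_property"
}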
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It correctly indicates a destructive action ('Delete') but does not mention permanence, reversibility, or authorization requirements. For a single-parameter tool, the minimal description is acceptable but lacks additional context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, perfectly concise, front-loaded with the verb and resource. Every word is necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 required parameter, no output schema), the description is complete enough. It explains what the tool does and how to use it. No missing information is critical for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (key described as 'Memory key to delete'). The description adds no new parameter info beyond the schema, but with full coverage, the schema does the heavy lifting. A score of 4 reflects that the schema is sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete', the resource 'stored memory', and the method 'by key', distinguishing it from siblings like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool vs alternatives. However, the sibling names imply a CRUD pattern, and the description 'by key' indicates the required identifier, offering some implicit usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
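Illustrative arguments; pass a key to fetch one memory (the key shown is hypothetical, matching the remember examples), or send an empty arguments object ({}) to list all stored keys.
{
  "key": "subject_property"
}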
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses that omitting key lists all stored memories, which is key behavioral detail. No mention of persistence or scope (session vs. cross-session), but context signals are otherwise clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no redundancy. Front-loaded with action and resource, immediately followed by usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple schema (1 optional param) and no output schema, description covers the essential behavior and usage context. Does not detail return format (e.g., list of keys vs. full memory), but is adequate for a straightforward retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter, so baseline is 3. Description adds value by explaining the effect of omitting the key ('list all'), which goes beyond the schema's 'omit to list all keys' phrasing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'Retrieve' and resource 'memory by key' with explicit alternative behavior (list all when key omitted). Distinguishes from sibling 'remember' and 'forget' which handle storage and deletion respectively.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'to retrieve context you saved earlier in the session or in previous sessions.' Does not explicitly state when not to use or mention alternatives, but the sibling tools 'remember' and 'forget' provide natural boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
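Illustrative arguments; the key comes from the schema's examples, and the value is a hypothetical ticker note.
{
  "key": "target_ticker",
  "value": "AAPL"
}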
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses behavioral traits: authenticated users get persistent memory, anonymous sessions last 24 hours. No annotations provided, so description carries full burden and does well. Does not mention if values are overwritten or if there are size limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: first states core purpose, second gives usage guidance with examples, third adds behavioral context. Every sentence adds value, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simplicity (2 params, no output schema), description is nearly complete. Only missing detail is behavior on overwrite or size limits, but these are minor. Good for a straightforward key-value store.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so schema already documents both parameters. Description adds example values for key and notes that value is any text, but does not add significant meaning beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool stores a key-value pair in session memory, specifies the verb 'store' and resource 'key-value pair', and distinguishes from siblings like 'recall' and 'forget' which handle retrieval and deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes when to use ('save intermediate findings, user preferences, or context across tool calls') and implies when not to (for retrieval use 'recall', for deletion use 'forget'). However, no explicit exclusion or alternative naming.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.