Edgar
Server Details
EDGAR MCP — SEC EDGAR public APIs (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-edgar
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 10 of 10 tools scored.
Most tools have clearly distinct purposes, especially the EDGAR tools, which are well separated by function (concept, facts, filings, search, lookup). The Pipeworx tools (ask_pipeworx, discover_tools) are distinct, though ask_pipeworx could be seen as overlapping with the individual EDGAR tools were it not for its description's emphasis on plain-English abstraction. The memory tools (remember, recall, forget) are distinct from the data tools.
The EDGAR tools follow a consistent pattern: edgar_<action> (e.g., edgar_company_concept, edgar_search_filings). The Pipeworx tools use verb-like names (ask_pipeworx, discover_tools) and the memory tools use simple imperative verbs (remember, recall, forget). While the naming conventions differ between groups, within each group they are consistent, and the prefixes help disambiguate domains.
With 10 tools, the set is well-scoped. The tools cover EDGAR data retrieval, Pipeworx querying, and session memory management without redundancy. Each tool serves a distinct purpose and the count is appropriate for the server's domain, neither too few nor too many.
The EDGAR toolset covers the key operations: looking up a CIK, retrieving filings, searching filings, getting specific concepts, and fetching aggregated facts. Missing operations might include downloading full filing text or viewing filing details, but the core needs are met. The memory tools provide create, read, and delete operations but lack an explicit update. Overall, the surface is largely complete for common tasks.
Available Tools
10 tools
ask_pipeworx
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
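As a rough illustration, here is what the JSON-RPC tools/call payload for this tool might look like. The request id is arbitrary, and a real MCP client would also handle the initialize handshake and the Streamable HTTP session headers, so treat this as a sketch of the argument shape rather than a complete client.

```python
# Sketch of an MCP tools/call payload for ask_pipeworx (argument shape only;
# transport, initialization, and session handling are left to the MCP client).
import json

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            # The only required parameter: a plain-English question.
            "question": "Get Apple's latest 10-K filing",
        },
    },
}

print(json.dumps(payload, indent=2))
```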
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that Pipeworx selects the best tool and fills arguments, indicating autonomous behavior. No annotations are provided, so the description carries the full burden. It adds value by explaining the orchestration aspect, though it could mention limitations or the scope of its data sources.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences: first states the action, second explains the mechanism, third gives examples. No redundant information, and all sentences serve a clear purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter (one required string) and no output schema, the description is largely complete. It explains how the tool works and provides usage examples. Slightly lower due to lack of mention of return format or potential errors, but still strong for a straightforward tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter, 'question', is described in the schema as 'Your question or request in natural language', and the description elaborates with examples and the instruction to 'just describe what you need'. Schema coverage is 100%, so baseline is 3; the description adds clear usage context, earning a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts natural language questions and returns answers by automatically selecting the best data source. It explicitly distinguishes itself from sibling tools by noting users don't need to browse or learn schemas, and provides concrete examples.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: when you want to ask a question in plain English without selecting tools or filling arguments. It contrasts with the sibling tools that likely require structured queries or tool selection, giving clear context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
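A small sketch of the arguments an agent might pass when narrowing a large catalog. The helper function below is illustrative, not part of the server, and limit simply mirrors the default and maximum noted in the table above.

```python
# Illustrative only: assemble tools/call params for discover_tools.
def tool_call_params(name: str, **arguments) -> dict:
    return {"name": name, "arguments": arguments}

params = tool_call_params(
    "discover_tools",
    query="find trade data between countries",  # natural-language description of the task
    limit=10,                                    # optional; default 20, max 50 per the schema
)
print(params)
```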
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns 'the most relevant tools with names and descriptions' and should be called first, which gives useful behavioral context. A minor gap is that it does not mention whether the search is case-sensitive or uses semantic matching, but overall it is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: first sentence states the action, second sentence describes the output, third sentence provides usage guidance. No wasted words, information is front-loaded, and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description covers all essential aspects: what it does, how to use it, when to use it, and what output to expect. It is complete for the agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema documents both parameters. The description adds value by providing an example query format (e.g., 'analyze housing market trends') and noting that the query should be a natural language description. It also mentions the default and max for 'limit', which is beyond the schema's description. Baseline 3, plus extra context gives a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches a tool catalog by natural language queries and returns relevant tools with names and descriptions. The verb 'search' and resource 'tool catalog' are specific, and the description distinguishes this tool from sibling tools like 'ask_pipeworx' (which likely answers questions) and 'edgar_*' tools (which are SEC-specific).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear when-to-use guidance and implies it should be used before other tools, establishing a usage order.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
edgar_company_concept
Track a specific financial metric over time for a company by CIK number (e.g., revenue, net income). Returns all reported values with dates and filing types.
| Name | Required | Description | Default |
|---|---|---|---|
| cik | Yes | Company CIK number (e.g., "320193" for Apple) | |
| concept | Yes | US-GAAP concept name (e.g., "Revenue", "NetIncomeLoss", "Assets", "Liabilities", "StockholdersEquity", "EarningsPerShareDiluted") | |
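Since the listing says this server wraps SEC EDGAR's free public APIs, a call like edgar_company_concept with cik "320193" and concept "NetIncomeLoss" presumably maps onto SEC's XBRL companyconcept endpoint. The sketch below hits that public endpoint directly; the zero-padding and User-Agent handling are assumptions about what the server does internally, and the contact address is a placeholder.

```python
# Hedged sketch: query SEC's public companyconcept endpoint, which this tool
# presumably wraps. SEC asks for a descriptive User-Agent on all requests.
import json
import urllib.request

cik = "320193"             # Apple; the endpoint expects a 10-digit, zero-padded CIK
concept = "NetIncomeLoss"  # US-GAAP concept/tag name

url = (
    "https://data.sec.gov/api/xbrl/companyconcept/"
    f"CIK{int(cik):010d}/us-gaap/{concept}.json"
)
req = urllib.request.Request(url, headers={"User-Agent": "example-app contact@example.com"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# Each unit (e.g., "USD") holds reported values with period end dates and form types.
for fact in data["units"]["USD"][:3]:
    print(fact["end"], fact["val"], fact["form"])
```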
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries burden. Description states it returns 'all reported values across filings for a given US-GAAP concept', indicating a read operation, but does not disclose pagination, data limits, or format. Adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, concise and to the point. First sentence states purpose, second adds detail about return. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description is sufficient for a simple retrieval tool with 2 parameters. It covers what is returned (values across filings) but does not mention date range, units, or format, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description adds no additional meaning beyond the schema examples. The schema already provides clear descriptions for cik and concept.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' and identifies resource 'financial metric over time for a company', clearly distinguishing it from siblings like edgar_company_facts (which likely returns all facts) and edgar_company_filings (which returns filings).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for retrieving a specific financial metric over time but does not explicitly state when to use this versus sibling tools like edgar_company_facts (which might return all concepts) or edgar_company_filings. No exclusion criteria or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
edgar_company_facts
Get structured financial data for a company by CIK number. Returns revenue, net income, assets, liabilities, and other key metrics with annual and historical values.
| Name | Required | Description | Default |
|---|---|---|---|
| cik | Yes | Company CIK number (e.g., "320193" for Apple). Use edgar_ticker_to_cik to look up if needed. | |
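For context, SEC's public companyfacts endpoint returns every reported US-GAAP and DEI fact for a CIK in one JSON document; this tool presumably condenses that payload into the key metrics listed above. The sketch below only shows the raw endpoint and is an assumption about the server's data source.

```python
# Hedged sketch of SEC's public companyfacts endpoint (assumed upstream source).
import json
import urllib.request

cik = "320193"  # Apple
url = f"https://data.sec.gov/api/xbrl/companyfacts/CIK{int(cik):010d}.json"
req = urllib.request.Request(url, headers={"User-Agent": "example-app contact@example.com"})
with urllib.request.urlopen(req) as resp:
    facts = json.load(resp)

print(facts["entityName"])
# The raw payload groups facts by taxonomy, then by concept name.
print(sorted(facts["facts"]["us-gaap"])[:5])
```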
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry full burden. It states the tool returns 'key financial metrics... with their most recent annual values', which adds some behavioral context but does not disclose whether data is limited to annual, what period it covers, or if it includes non-annual data. With no annotations, a score of 3 is adequate but lacking depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that front-load the core purpose and then give examples. Efficient, with no wasted words, though it could be slightly more structured (e.g., separating purpose from usage guidance).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description partially compensates by mentioning return of 'key financial metrics like revenue, net income, assets' but does not specify format or structure of response. For a simple tool with one param, it is fairly complete but could detail what is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with a single parameter cik. The description adds value by clarifying the format ('e.g., "320193" for Apple') and mentioning the sibling tool edgar_ticker_to_cik for lookup, which goes beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Get structured XBRL financial data') and resources ('a company by CIK'), clearly distinguishes itself from siblings by mentioning XBRL financial data and most recent annual values, which is unique among the sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving financial data, but does not explicitly state when to use this tool vs alternatives like edgar_company_concept (which may return specific concepts) or edgar_company_filings (which returns filings metadata). There is no guidance on exclusion or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
edgar_company_filings
Get recent SEC filings for a company by ticker (e.g., 'AAPL') or CIK number. Filter by form type (e.g., '10-Q', '10-K'). Returns filing dates, types, and accession numbers.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max filings to return (1-40, default 20) | |
| form_type | No | Filter by SEC form type (e.g., "10-K", "10-Q", "8-K"). Omit for all types. | |
| ticker_or_cik | Yes | Ticker symbol (e.g., "AAPL") or CIK number (e.g., "320193") | |
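The form-type filter described above mirrors what can be done against SEC's public submissions feed, which this tool presumably wraps when given a CIK. The filtering below is an illustration of that behavior, not the server's actual implementation, and ticker-to-CIK resolution is left out.

```python
# Hedged sketch: list recent 10-K filings from SEC's public submissions feed.
import json
import urllib.request

cik, form_type, limit = "320193", "10-K", 5
url = f"https://data.sec.gov/submissions/CIK{int(cik):010d}.json"
req = urllib.request.Request(url, headers={"User-Agent": "example-app contact@example.com"})
with urllib.request.urlopen(req) as resp:
    recent = json.load(resp)["filings"]["recent"]

# "recent" holds parallel arrays: form types, filing dates, accession numbers.
rows = zip(recent["form"], recent["filingDate"], recent["accessionNumber"])
for form, date, accession in [r for r in rows if r[0] == form_type][:limit]:
    print(date, form, accession)
```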
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Description clearly states it retrieves recent SEC filings, which is non-destructive. It does not mention rate limits or pagination, but for a straightforward retrieval tool, the behavioral implications are clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the purpose and key capabilities. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 simple parameters, no output schema, and no annotations, the description is sufficient. It explains what the tool does, what inputs it takes, and the optional filter. No missing critical information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description adds value by mentioning 'Optionally filter by form type', but the schema already describes each parameter. No additional semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it gets recent SEC filings for a company, accepts ticker or CIK, and optionally filters by form type. The verb 'Get' and resource 'SEC filings' are specific, and it distinguishes from sibling tools like edgar_search_filings (which is likely for searching across companies) and edgar_company_concept/facts (which deal with concepts and facts, not filings).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies when to use: when you need recent filings for a specific company. However, it does not explicitly state when not to use or how it differs from edgar_search_filings. Context signals show a sibling edgar_search_filings, but the description does not address the distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
edgar_search_filings
Search SEC filings by keyword, company name, or topic. Filter by form type (e.g., '10-K', '8-K') and date range. Returns filing metadata, accession numbers, and document links.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-40, default 10) | |
| query | Yes | Search query (e.g., "artificial intelligence", "Tesla revenue") | |
| end_date | No | End date in YYYY-MM-DD format (e.g., "2024-12-31") | |
| form_type | No | Filter by SEC form type (e.g., "10-K", "10-Q", "8-K", "DEF 14A"). Omit for all types. | |
| start_date | No | Start date in YYYY-MM-DD format (e.g., "2024-01-01") | |
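A sketch of a scoped search request: the free-text query plus the optional form-type and date-range filters from the table above, expressed in the same tools/call argument shape used earlier. The values are arbitrary examples.

```python
# Illustrative arguments for edgar_search_filings (values are examples only).
arguments = {
    "query": "artificial intelligence",  # required free-text query
    "form_type": "10-K",                 # optional; omit for all form types
    "start_date": "2024-01-01",          # optional, YYYY-MM-DD
    "end_date": "2024-12-31",            # optional, YYYY-MM-DD
    "limit": 10,                         # optional; 1-40, default 10
}
print({"name": "edgar_search_filings", "arguments": arguments})
```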
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It mentions full-text search and optional filtering, but does not clarify aspects like rate limits, result order, or whether searches are case-sensitive. The behavior is adequately described for a search tool, but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences: first stating the core function, second giving search examples, third mentioning optional filters. No extraneous information, efficiently front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema, the description could clarify what fields are returned (e.g., filing metadata, excerpts). However, for a straightforward search tool with full schema coverage, the description is reasonably complete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description restates the search capability but does not add significant meaning beyond what the schema provides, such as format expectations or relationship between start_date and end_date.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs full-text search across SEC EDGAR filings, with specific examples of search types (keyword, company name, topic) and optional filters (form type, date range). This distinguishes it from sibling tools like edgar_company_filings which likely target specific companies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context (search filings by keyword/company/topic) and mentions optional filters, but does not explicitly guide when to use this tool over siblings (e.g., when to prefer edgar_company_filings or edgar_company_concept).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
edgar_ticker_to_cik
Convert a stock ticker (e.g., 'TSLA') to its CIK number. Returns the CIK identifier and company name for use in other edgar tools.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker symbol (e.g., "AAPL", "MSFT", "TSLA") | |
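SEC publishes a single ticker-to-CIK mapping file, which this tool presumably consults (or an equivalent index). The field names below match that public file as I understand it, but treat them as assumptions rather than a contract, and the contact address is a placeholder.

```python
# Hedged sketch: resolve a ticker to its CIK via SEC's public mapping file.
import json
import urllib.request

url = "https://www.sec.gov/files/company_tickers.json"
req = urllib.request.Request(url, headers={"User-Agent": "example-app contact@example.com"})
with urllib.request.urlopen(req) as resp:
    companies = json.load(resp)  # dict of index -> {cik_str, ticker, title}

ticker = "TSLA"
match = next(c for c in companies.values() if c["ticker"] == ticker)
print(match["cik_str"], match["title"])  # CIK number and company name
```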
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the tool as a lookup operation, which is non-destructive and read-only. It does not disclose any potential behavioral traits such as rate limits, data freshness, or error handling. With no annotations, a score of 3 is appropriate as it conveys the basic nature without depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with the primary action and resource. Every word serves a purpose. Excellent conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has a single parameter and no output schema, the description adequately explains the purpose and parameter. However, it could be improved by indicating the format of the returned CIK or any prerequisites. Still, it is minimally complete for a simple lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents the 'ticker' parameter. The description adds context that the CIK is needed for other tools, but does not add meaning beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool looks up a CIK number from a ticker symbol, and specifies the CIK is needed for other EDGAR tools. Verb ('look up') and resource ('CIK number') are specific, and it distinguishes from siblings which are other EDGAR or memory tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the CIK is needed for other EDGAR tools, implying usage context. However, it does not explicitly state when to use this tool vs. alternatives or provide any exclusion criteria. It provides minimal guidance on usage beyond the basic purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full burden. It indicates a destructive action ('delete') but does not disclose side effects like whether deletion is permanent or reversible, or any authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of six words, front-loaded with the action and resource. It contains no redundant information and is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (1 required parameter, no output schema, no nested objects), the description is sufficient for basic use. However, it lacks context on the effect of deletion (e.g., cascade effects) or confirmation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no extra meaning beyond the schema; it merely restates the parameter 'key' without clarifying format or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the verb 'Delete' and specifies the resource 'stored memory' with a clear parameter 'key'. It is distinct from sibling tools like 'remember' (store) and 'recall' (retrieve), avoiding ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when you need to remove a memory by key, but it does not explicitly state when not to use it or mention alternatives like 'recall' or 'remember' for non-destructive actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It clearly states the tool is for retrieval (not mutation), and that omitting the key lists all memories. This is sufficient behavioral transparency for a read-only tool. It does not discuss performance or persistence details, but the key behaviors are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. The first sentence states the core function, the second provides usage context. Every word earns its place. Front-loaded with the action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple (1 optional param, no output schema) and the description is sufficient to use it correctly. It explains both retrieval modes and the context (session memory). Without an output schema, it might benefit from mentioning the return format (e.g., 'returns the value as a string'), but the description is complete enough for a simple tool given the context signals.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining the dual behavior: 'Retrieve by key' vs 'list all' when key is omitted. This goes beyond the schema description ('omit to list all keys') by explaining the context of use. It clarifies the optionality and the effect of omission.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' The verb 'retrieve' and the resource 'memory' are specific, and it distinguishes between two modes (by key vs. list all). The tool name 'recall' is well-aligned, and it is distinct from siblings like 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context for when to use: 'to retrieve context you saved earlier in the session or in previous sessions.' This gives good timing guidance. However, it does not explicitly mention when NOT to use or contrast with alternatives like 'ask_pipeworx' or other tools. No exclusions or alternative tool names are given, so it's slightly less than perfect.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
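Taken together with recall and forget above, the session-memory workflow is a simple store / read / delete round trip. The sketch below just lists the tools/call argument payloads for each step; transport and session handling are up to the MCP client.

```python
# Illustrative argument payloads for the memory round trip: store, read, delete.
calls = [
    ("remember", {"key": "target_ticker", "value": "AAPL"}),  # store a key-value pair
    ("recall",   {"key": "target_ticker"}),                   # retrieve one key
    ("recall",   {}),                                         # omit key to list all stored keys
    ("forget",   {"key": "target_ticker"}),                   # delete when no longer needed
]
for name, arguments in calls:
    print({"name": name, "arguments": arguments})
```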
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses persistence behavior (authenticated vs. anonymous) beyond what annotations provide. It does not mention overwrite behavior on duplicate keys, but with annotations absent, the description carries the load well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, front-loaded with verb and resource, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple (2 string params, no output schema), and description covers purpose, usage, and persistence. Could mention key uniqueness or overwrite behavior, but otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. Description reinforces purpose of key-value pair and provides example keys, adding context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states verb 'store' and resource 'key-value pair in session memory', clearly distinguishing from sibling tools like 'recall' (retrieve) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'use this to save intermediate findings, user preferences, or context across tool calls', providing clear when-to-use guidance. Also notes persistence difference between authenticated and anonymous users.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.