OECD
Server Details
OECD MCP — Organisation for Economic Co-operation and Development data
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-oecd
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 14 of 14 tools scored. Lowest: 3.2/5.
Tools target distinct actions on OECD data and Pipeworx features. While ask_pipeworx is broad, it acts as a meta-tool; the other tools have clear, separate purposes without overlap.
All tool names use lowercase with underscores, following a consistent verb_noun pattern. Even single-word memory tools (forget, recall, remember) fit the pattern as verb-only.
14 tools is well-scoped for a data query and analysis server. It covers data access, search, comparisons, memory, and feedback without being excessive or too sparse.
The tool set covers search, retrieval, comparison, entity resolution, claim validation, memory, and feedback. For OECD data, fetching and searching are complete; missing CRUD operations are not expected for a read-only data source.
Available Tools
14 tools

ask_pipeworx (Grade A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
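For concreteness, a minimal call sketch, assuming the official Python MCP SDK (`mcp` package) over the server's Streamable HTTP transport; the endpoint URL is a placeholder (the listing does not show one), and the question is illustrative.

```python
# Minimal sketch: invoke ask_pipeworx via the Python MCP SDK over
# Streamable HTTP. The URL below is a placeholder, not the real endpoint.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # 'question' is the sole (required) parameter per the table above.
            result = await session.call_tool(
                "ask_pipeworx",
                arguments={"question": "What is the US trade deficit with China?"},
            )
            for block in result.content:
                print(block)

asyncio.run(main())
```

The later tools on this page take the same shape of call; only the tool name and the `arguments` mapping change.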
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that Pipeworx picks the tool, fills arguments, and returns the result. However, it does not mention potential limitations, side effects, or what happens if no data source is suitable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise: two sentences plus examples. Purpose is front-loaded. Every sentence adds value. No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately covers functionality and delegation, and provides examples. It could specify the return format, but that is acceptable for a natural-language query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'question' parameter well-described. The description adds value by explaining that questions are asked in natural language and by providing examples, enhancing understanding beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: answering natural-language questions using the best available data source. It includes a specific verb (ask) and resource (question answered), and distinguishes itself from sibling tools by emphasizing that the agent need not figure out which pack/tool to call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage: when you have a natural-language question and don't want to pick a pack/tool yourself. Examples provide context. It lacks explicit when-not-to-use guidance or alternatives, but is sufficient for clear intent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
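Hypothetical argument payloads for both entity types, mirroring the table above; they would be passed as the `arguments` mapping in the call sketch shown under ask_pipeworx.

```python
# Illustrative compare_entities arguments; values come from the examples above.
company_args = {
    "type": "company",           # "company" or "drug"
    "values": ["AAPL", "MSFT"],  # 2-5 tickers/CIKs when type="company"
}
drug_args = {
    "type": "drug",
    "values": ["ozempic", "mounjaro"],  # 2-5 drug names when type="drug"
}
```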
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns paired data and resource URIs but does not mention authorization, rate limits, or potential side effects. Behavior is adequately described for a comparison tool, but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A compact description conveys purpose, per-type data details, and the efficiency benefit. No wasted words, front-loaded with the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers return values ('paired data + pipeworx:// resource URIs') and differentiates the two entity types. It does not explain error handling or edge cases, but is sufficient for a well-defined comparison tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by specifying that for company type, values should be tickers/CIKs, and for drug type, names. This clarifies the schema's enum and array descriptions beyond the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Compare') and resource ('entities'), clearly distinguishes two entity types with concrete data examples, and states it replaces 8–15 sequential calls, making its purpose and efficiency benefit explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explains when to use the tool (for side-by-side comparison of 2–5 entities) and implies that for single entities or other tasks, different tools like ask_pipeworx would be appropriate. However, it does not explicitly list exclusions or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
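A hypothetical payload for discover_tools, reusing the call pattern sketched under ask_pipeworx; the limit value is arbitrary.

```python
# Illustrative discover_tools arguments; query is free-form natural language.
args = {
    "query": "analyze housing market trends",  # what you want to do
    "limit": 10,                               # optional; default 20, max 50
}
```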
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It only states it returns tool names and descriptions, missing details like read-only nature, rate limits, or response specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Short and front-loaded with the main action, with concrete domains and examples. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with only 2 parameters, all covered by the schema. The description mentions the return format (names and descriptions) and includes usage context; the output schema is missing, but that is not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds example queries but doesn't enhance parameter meaning beyond the schema's descriptions of query and limit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Find tools by describing the data or task' and lists concrete domains, distinguishing it from sibling tools like search_dataflows or ask_pipeworx by specifying it is for discovering tools rather than returning a single answer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises 'Call this FIRST when you have many tools available and want to see the option set,' giving clear context for use. Could be stronger if it mentioned when not to use, but sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (Grade A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
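A hypothetical payload for entity_profile; note the description's constraint that value must be a ticker or zero-padded CIK, never a bare name.

```python
# Illustrative entity_profile arguments. Only type="company" is supported.
args = {
    "type": "company",
    "value": "AAPL",  # or a zero-padded CIK such as "0000320193"; names rejected
}
```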
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It lists data sources and mentions returning pipeworx:// citation URIs. However, it does not cover error conditions, rate limits, or what happens if the entity is not found. The replacement of multiple calls implies efficiency but lacks details on typical response time or pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loading the core purpose. It efficiently lists data types, return format, efficiency gain, and input guidance. Every sentence adds value with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple data sources) and lack of output schema, the description sufficiently covers inputs, return format (citation URIs), and scope. Minor omissions include absence of performance characteristics or handling of missing data, but overall it provides enough for an agent to decide on usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds meaningful context: it restricts type to only 'company', clarifies value as ticker or CIK, explicitly states names are not supported, and advises using resolve_entity as a fallback. This guidance significantly enhances the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it returns a full profile of an entity across all relevant Pipeworx packs, listing specific data sources (SEC filings, XBRL financials, patents, news, LEI) for company type. It distinguishes itself from siblings like resolve_entity and compare_entities by being a single-call comprehensive profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It clearly specifies when to use this tool versus alternatives: it replaces 10+ sequential pack calls across SEC EDGAR, USPTO, news, and GLEIF, and it advises using resolve_entity first if only a name is available, providing clear prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch_dataset (Grade A)
Fetch tidy rows from any OECD dataflow. flow_ref examples: "OECD.SDD.NAD,DSD_NAMAIN1@DF_QNA_EXPENDITURE_GROWTH,1.0". The key string is a dot-separated dimension filter (e.g., "USA.....Q" — leave empty to fetch everything). Use start/end periods like "2020-Q1" or "2020". Returns labeled rows; OECD enforces a result-size limit and may truncate broad queries — narrow with key dimensions or shorter time ranges.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Dot-separated dimension key (or empty for all) | |
| limit | No | Cap rows returned (default 5000) | |
| flow_ref | Yes | SDMX dataflow reference | |
| end_period | No | Inclusive end period | |
| start_period | No | e.g., "2020", "2020-Q1", "2020-01" | |
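A hypothetical payload for fetch_dataset built from the examples in the description; the end period and row cap are illustrative, and narrowing the key and time window follows the truncation advice above.

```python
# Illustrative fetch_dataset arguments: quarterly US expenditure-growth data,
# narrowed by key and time window to stay under the OECD result-size limit.
args = {
    "flow_ref": "OECD.SDD.NAD,DSD_NAMAIN1@DF_QNA_EXPENDITURE_GROWTH,1.0",
    "key": "USA.....Q",         # dot-separated dimension filter; "" fetches all
    "start_period": "2020-Q1",  # inclusive
    "end_period": "2023-Q4",    # inclusive; illustrative value
    "limit": 1000,              # optional row cap; default 5000
}
```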
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description discloses return type (labeled rows), result-size limits, and truncation behavior. Describes conservative behavior needed (narrow queries). No annotation contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Compact yet comprehensive. Single paragraph front-loads purpose, then sequentially covers examples, key format, periods, return type, and limitation. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but the description notes it returns 'labeled rows' and warns about truncation. Covers usage, parameters, and limitations. Slightly unclear on exact output structure, but adequate for a fetch tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds significant value: examples for flow_ref and key format, explains period formats, and links parameter usage to quota limits. Baseline 3 exceeded.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it fetches tidy rows from OECD dataflows, with concrete examples of flow_ref and key string. Distinguishes from siblings like search_dataflows (search vs fetch) and list_curated_flows (list vs fetch).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides examples for start/end period formats and advises narrowing queries to avoid truncation. Implicitly tells when to use (when data rows are needed) and when to avoid (broad queries). Could explicitly state alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade B)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
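A hypothetical payload for forget; the key name is an example borrowed from remember's schema.

```python
# Illustrative forget arguments; deletes the memory stored under this key.
args = {"key": "target_ticker"}  # hypothetical key previously set via remember
```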
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; the description only states 'delete' without disclosing if deletion is permanent, reversible, or any side effects. For a destructive tool, more transparency is expected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Front-loaded with action and resource. Could be slightly more informative without harming conciseness, but acceptable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter deletion tool, the description covers the core functionality. However, missing details like return value, idempotency, or error behavior leaves some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (one param 'key' with description). The tool description adds no extra meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a previously stored memory by key' uses the specific verb 'delete' and resource 'stored memory', clearly distinguishing it from siblings like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Offers brief when-to-use cues (stale context, finished task, clearing sensitive data) and cross-references remember and recall, but gives no when-not-to-use guidance or explicit alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_curated_flows (Grade A)
List OECD dataflow refs we have pre-vetted, grouped by topic (gdp, labour, prices, finance, households, health, demographics, projections, tax, education, environment, technology). Pass the flow_ref to fetch_dataset. For everything else use search_dataflows or browse https://data-explorer.oecd.org.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | No | Optional topic filter | |
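Hypothetical payloads for list_curated_flows; presumably the topic filter takes one of the topic groups named in the description.

```python
# Illustrative list_curated_flows arguments for both modes.
all_topics_args = {}              # omit topic to list every curated flow
gdp_only_args = {"topic": "gdp"}  # filter to one topic group from the list
```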
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description implies it's a read-only listing but does not explicitly state behavioral traits like non-destructiveness, caching, or error handling. Adequate for a simple list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with front-loaded purpose. Every sentence adds value; no redundant or missing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully sufficient for a simple list tool with one optional parameter and no output schema. Provides usage context and alternatives.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes 'topic' as optional. Description adds no additional semantic information beyond the schema. Baseline 3 due to full schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Describes listing pre-vetted OECD dataflow refs grouped by specific topics (gdp, labour, etc.). Clearly distinguishes from siblings like search_dataflows and fetch_dataset.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool (for pre-vetted OECD dataflows) and directs to alternatives (search_dataflows, website) for other needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
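A hypothetical payload for pipeworx_feedback; the message and context strings are invented for illustration.

```python
# Illustrative pipeworx_feedback arguments; type must be one of the enum values.
args = {
    "type": "data_gap",  # bug | feature | data_gap | praise | other
    "message": "No curated flow covers OECD regional well-being indicators.",
    "context": "list_curated_flows / OECD pack",  # optional: tool, pack, vertical
}
```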
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. Discloses rate limit (5 per identifier per day) and that it's free. Does not detail internal behavior, but for a feedback submission tool, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Each sentence is purposeful: purpose, usage guidance, writing guidance, and constraints (rate limit, quota). No redundancy; front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple feedback tool without output schema, the description covers purpose, how to use, what to include, and constraints. Complete given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (detailed enum and field descriptions). Description adds value by instructing on message content specifics, which goes beyond schema. Thus above baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool sends feedback to the Pipeworx team. Lists specific use cases (bug reports, feature requests, missing data, praise) and is readily distinguished from sibling tools, which serve querying and fetching purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (bug reports, etc.) and provides specific guidance on how to write feedback (describe in terms of Pipeworx tools/data, no end-user prompt verbatim). Mentions rate limit but does not provide when-not-to-use or alternatives, which are not critical given the tool's focused purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
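Hypothetical payloads for recall's two modes, per the optional-key behavior described above.

```python
# Illustrative recall arguments.
list_keys_args = {}                        # omit key to list all saved keys
get_value_args = {"key": "target_ticker"}  # retrieve one saved value
```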
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It explains the two modes (single retrieval vs listing) but does not disclose whether the operation is idempotent, has side effects, or how errors (e.g., missing key) are handled. This is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loads the core functionality. Every phrase is informative, and there is no unnecessary text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has a single optional parameter and no output schema, the description covers both usage modes adequately. It could mention return format or behavior for nonexistent keys, but overall it is sufficient for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the only parameter 'key' is described). The description adds 'Memory key to retrieve (omit to list all keys)', which reinforces the schema but does not add significant new semantic detail. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies 'Retrieve a value previously saved via remember, or list all saved keys (omit the key argument)', which states the verb and resource distinctively. Among siblings 'remember' and 'forget', this tool is uniquely for retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises using this tool to retrieve context saved earlier, providing context. It does not explicitly mention when not to use it or suggest alternatives like 'remember' for storage, but the guidance is clear enough for basic use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (Grade A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
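A hypothetical payload for recent_changes; the ticker is illustrative, and the window uses the "30d" shorthand recommended for typical monitoring.

```python
# Illustrative recent_changes arguments.
args = {
    "type": "company",  # only "company" is supported today
    "value": "MSFT",    # ticker or zero-padded CIK
    "since": "30d",     # ISO date ("2026-04-01") or relative ("7d", "3m", "1y")
}
```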
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses parallel fan-out sources, since parameter format, and return structure (changes, count, URIs). Lacks explicit read-only hint or rate limits, but adequately informs about behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the main purpose; every sentence adds value, with no filler. Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, description adequately covers return format and sources. Input parameters fully explained. Slight gap: no mention of pagination or error handling, but sufficient for typical agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds meaningful context: explains since format with examples, notes type is limited to 'company', and gives value examples (ticker, CIK). Goes beyond schema by describing return structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs and resources: 'What's new with a company in the last N days/months?' and details the fan-out to SEC EDGAR, GDELT, and USPTO. It clearly distinguishes itself from siblings like entity_profile and compare_entities by targeting change-monitoring workflows.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states use cases: 'brief me on what happened with X' or change-monitoring workflows. Does not provide explicit exclusions, but context with sibling tools makes alternatives clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
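A hypothetical payload for remember; the key comes from the schema's examples and the value string is invented.

```python
# Illustrative remember arguments; stores a key-value pair for later recall.
args = {
    "key": "target_ticker",  # example key from the schema
    "value": "AAPL (resolved from 'Apple' via resolve_entity)",  # any text
}
```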
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses session memory and persistence based on authentication, but does not mention overwrite behavior or side effects. Since no annotations exist, description carries burden but is adequate for a simple KV store.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise and front-loaded with purpose, though it could be slightly more structured. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage examples, and persistence. Lacks mention of return value, but for a simple store operation, this is acceptable given no output schema is required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with good descriptions for both parameters. The description adds minimal value beyond the schema, thus baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Describes the tool as saving data as a key-value pair scoped by identifier, which is specific and distinguishes it from siblings like 'forget' and 'recall'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete examples of when to use (save findings, preferences, context) and mentions persistence differences, but lacks explicit exclusions for alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
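Hypothetical payloads for resolve_entity's two entity types, using examples from the description.

```python
# Illustrative resolve_entity arguments.
company_args = {"type": "company", "value": "Apple"}  # name, ticker, or CIK
drug_args = {"type": "drug", "value": "ozempic"}      # brand or generic name
```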
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return type (IDs and pipeworx:// URIs) and implies a read-only operation via 'resolve'. While it does not explicitly state that no data is modified, the description is sufficiently transparent about the tool's behavior for a resolution tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: it states the core purpose first, elaborates on supported types with examples, and notes the output and benefit. Every sentence earns its place, and it is efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no annotations, no output schema), the description covers the input types and output adequately. It explains what the tool does and what it returns. However, it does not describe the output format in detail or mention any pagination or error conditions, which may be acceptable for a simple lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters. The description adds value by providing concrete examples of values for each type (e.g., ticker, CIK, name for company; brand or generic name for drug) and explaining what each type resolves to (SEC EDGAR identity vs RxCUI + ingredient + brand). This goes beyond the schema's brief descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves entities to canonical IDs across data sources, explicitly lists two entity types (company and drug) with their mappings, and mentions returning IDs and URIs. The verb 'resolve' is specific and the purpose is unambiguous; though it does not directly differentiate from the sibling 'compare_entities', that is irrelevant for purpose clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use the tool (to get canonical IDs for company or drug entities) and even notes that it replaces 2-3 lookup calls, implying efficiency. However, it does not explicitly state when not to use it or what alternatives exist (e.g., compare_entities for comparing entities instead of resolving).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_dataflows (Grade A)
Search OECD's SDMX dataflow registry by keyword. Returns matching dataflow refs ready to pass to fetch_dataset. If results are noisy, refine with a more specific keyword or restrict by agency.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 25, max 100) | |
| query | Yes | Keyword (matches dataflow names + descriptions) | |
| agency | No | Optional agency filter, e.g., OECD.SDD.NAD, OECD.ECO, OECD.ENV | |
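A hypothetical payload for search_dataflows; the query string is invented, and the agency filter uses one of the examples from the schema.

```python
# Illustrative search_dataflows arguments; narrow by agency if results are noisy.
args = {
    "query": "quarterly national accounts",  # matches names + descriptions
    "agency": "OECD.SDD.NAD",                # optional agency filter
    "limit": 25,                             # optional; default 25, max 100
}
```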
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It describes the read-only search behavior and output format, but does not disclose pagination details or any limitations beyond what is in the schema. Adds refinement advice but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: purpose, output and downstream use, and refinement advice. No wasted words; each sentence serves a distinct, valuable purpose. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers key aspects: what it does, what it returns (dataflow refs for fetch_dataset), and how to refine queries. Missing output schema details, but implied by the downstream tool reference. Adequate for a search tool with good parameter descriptions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds value by explaining the query matches names and descriptions and suggesting refinement, but this is more usage guidance than additional semantic clarity. Meets baseline but does not significantly exceed it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches the OECD's SDMX dataflow registry by keyword and returns matching dataflow refs. It distinguishes from siblings by explicitly linking the output to fetch_dataset.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance: refine with more specific keyword or restrict by agency if results are noisy. Implicitly suggests using before fetch_dataset, but does not explicitly state when not to use or list alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (Grade A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
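A hypothetical payload for validate_claim, reusing the example claim from the schema.

```python
# Illustrative validate_claim arguments; v1 covers company-financial claims.
args = {"claim": "Apple's FY2024 revenue was $400 billion"}
```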
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses use of SEC EDGAR + XBRL, return types (verdict, structured form, actual value with citation, delta), and that it replaces multiple agent calls. No annotations, so description carries full burden; covers key behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the main purpose, with no redundant information. Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given single parameter and no output schema, description sufficiently covers what the tool does and what it returns (verdict types, citation, delta). Complete for this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Single parameter 'claim' has clear schema description; tool description adds domain and example, enhancing understanding beyond schema. Schema coverage 100%, so baseline 3, description adds value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool fact-checks natural-language claims against authoritative sources, specifies domain (company-financial claims for public US companies), and lists verdict types. Distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Specifies supported claim types (revenue/net income/cash) and scope, implying when to use. Lacks explicit when-not-to-use or alternatives, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.