Overpass
Server Details
OpenStreetMap Overpass MCP — programmatic queries against the OSM database
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-overpass
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 14 of 14 tools scored. Lowest: 3.6/5.
Many tools have distinct purposes, but there is significant overlap between the general 'ask_pipeworx' and the specialized tools like 'compare_entities', 'entity_profile', and 'validate_claim'. An agent could easily be uncertain which to use, especially since 'ask_pipeworx' claims to route across many sources and replace multiple calls.
Tool names follow no consistent pattern: some are single verbs (forget, recall, remember), some are verb_noun (compare_entities, discover_tools, resolve_entity, validate_claim), and others are noun phrases (entity_profile, places_in_bbox, pois_near, recent_changes). This mix of action-oriented and descriptive names without a clear convention reduces predictability.
With 14 tools, the count is within the typical well-scoped range (3-15). However, the server combines three distinct domains (Pipeworx data queries, OpenStreetMap spatial queries, and memory management), which might be slightly overbroad for a single server. Nonetheless, each tool earns its place within its sub-domain.
For the Pipeworx data domain, the tool surface covers key operations: general queries, comparisons, profiles, entity resolution, fact-checking, and feedback. For OSM, basic POI queries and a raw query tool are provided, which is sufficient for spatial discovery. The memory tools (remember, recall, forget) form a complete set. Minor gaps exist (e.g., no reverse geocoding), but overall the surface is fairly complete.
Available Tools
14 tools
ask_pipeworx
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
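For concreteness, a call to this tool as an MCP tools/call request over JSON-RPC might look like the following sketch; the question value is taken from the description's own examples.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}

Later examples on this page show only the params object, since the surrounding JSON-RPC envelope is identical.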
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It explains that the tool picks the right tool, fills arguments, and returns results. However, it does not mention potential limitations like latency, rate limits, or whether the operation is read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear definition, usage guidance, list of sources, mechanism explanation, and examples. It is informative but slightly wordy; it could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of routing to 300+ sources and a simple input schema, the description covers the main aspects: purpose, usage, mechanism, and examples. It does not mention output format, but that is acceptable without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'question' has a schema description 'Your question or request in natural language'. The tool description adds significant value by giving examples and explaining that the question is routed to various data sources, which goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool answers natural-language questions by automatically selecting the right data source. It distinguishes itself from siblings like 'query' or 'discover_tools' by focusing on routing to multiple sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when a user asks... and you don't want to figure out which Pipeworx pack/tool to call.' This provides clear guidance. However, it does not mention situations where the tool should not be used, such as when specific parameter control is needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
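A plausible params object for a drug comparison, reusing the example values from the schema:

{
  "name": "compare_entities",
  "arguments": {
    "type": "drug",
    "values": ["ozempic", "mounjaro"]
  }
}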
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses data sources (SEC EDGAR/XBRL for companies, FAERS/FDA/clinicaltrials for drugs) and return format (paired data + citation URIs). Could mention read-only nature but sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with purpose. Each sentence adds value (use cases, data sources, efficiency gain). Could be slightly more succinct but well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Thoroughly covers both entity types, indicates return includes paired data and citation URIs, and explains it collapses many calls. With no output schema, description compensates effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds significant meaning: explains what data 'type' entails per entity (revenue/net income/cash/debt vs adverse events/approvals/trials) and clarifies that 'values' should be tickers or drug names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it compares 2-5 companies or drugs side by side, with specific metrics for each type. Distinct from sibling tools like entity_profile which likely handle single entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists user phrases that trigger its use ('compare X and Y', 'X vs Y', etc.) and notes it replaces 8-15 sequential calls. No explicit when-not-to-use but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
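A sketch of a discovery call; the query string is one of the schema's own examples, and the limit of 10 is an arbitrary illustration within the documented maximum of 50:

{
  "name": "discover_tools",
  "arguments": {
    "query": "analyze housing market trends",
    "limit": 10
  }
}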
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that it returns top-N most relevant tools with names and descriptions. This is sufficient for a search-like tool, though it could mention if there is any ranking or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise for its information density. It lists many domain examples efficiently. A minor improvement could be to trim redundant phrasing, but overall well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return format (top-N most relevant tools with names and descriptions). It covers the tool's purpose, usage, and return value completely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining the query as a natural language description and providing example queries. The limit parameter is explained with default and max values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool finds tools by describing a data or task, and provides extensive examples (SEC filings, FDA drugs, etc.). It also distinguishes itself from siblings by recommending it be called first to explore the option set.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use ('Use when you need to browse, search, look up, or discover what tools exist for:') and suggests calling it first. It does not include explicit exclusions for when not to use, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
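A minimal params sketch using the documented ticker example; per the description, a bare name like "Apple" would have to go through resolve_entity first:

{
  "name": "entity_profile",
  "arguments": {
    "type": "company",
    "value": "AAPL"
  }
}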
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It describes the data composition and citation format. It implies a read operation but does not explicitly state read-only behavior or side effects; the verb 'Get' and the context make this adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is comprehensive yet efficient. It front-loads the purpose and usage, then lists return data. Each sentence contributes meaningful information without redundancy, though it could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (aggregating from multiple sources) and lack of output schema, the description adequately covers inputs, outputs, usage, and limitations. It does not mention data freshness or pagination, but for a single-call profile tool this is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are already documented. The description adds significant value: for 'type' it notes only 'company' is supported with future plans; for 'value' it provides concrete examples (AAPL, 0000320193) and the critical constraint that names are not supported, which is absent from the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb+resource: 'Get everything about a company in one call.' Lists specific return components (SEC filings, fundamentals, patents, news, LEI) and distinguishes from sibling tools by noting it replaces 10+ separate tool calls across multiple data sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit usage scenarios provided (e.g., 'tell me about X', 'research Microsoft'), and a clear limitation: 'Names not supported — use resolve_entity first.' Directly references a sibling tool for alternative approach.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, but description clearly implies destructive mutation. Could add error behavior details (e.g., what if key missing) but sufficient for simple key delete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no fluff. Each sentence adds value: purpose, usage guidance, and sibling references.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple tool with one parameter and no output schema. Could mention error handling but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter. Description does not add extra meaning beyond schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete), resource (memory), and mechanism (by key). It distinguishes from sibling tools (remember, recall).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: when context is stale, task done, or to clear sensitive data. Also suggests pairing with remember and recall.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
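A hypothetical params object reporting the reverse-geocoding gap noted in the quality summary above. The shape of the optional context object is not documented, so the form shown here is a guess:

{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "No reverse-geocoding tool exists to turn coordinates back into addresses.",
    "context": { "tool": "pois_near" }
  }
}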
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully discloses behavioral traits: rate-limited to 5 per identifier per day, free, doesn't count against tool-call quota, and that the team reads digests daily. It also implies non-destructive feedback submission.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is fairly concise and well-structured, front-loading the core purpose. It could be slightly more concise, but every sentence adds value and the flow is logical.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 3 parameters, enums, and nested objects, and no output schema, the description is comprehensive. It covers usage, behavior, rate limits, and what not to do, making it complete enough for an agent to use successfully.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds some context beyond the schema, such as explaining the 'type' enum values and the optional 'context' parameter. However, the schema already has detailed descriptions for each parameter, so the additional value is marginal.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Tell the Pipeworx team something is broken, missing, or needs to exist.' It specifies the resource (Pipeworx team), the verb (tell), and lists specific use cases (bug, feature, data_gap, praise). It also distinguishes from sibling tools by providing when to use it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool: for bugs, feature requests, data gaps, or praise. It also includes clear exclusions: 'don't paste the end-user's prompt' and mentions rate limits and that it's free. This helps the agent choose correctly among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
places_in_bbox
Find OSM POIs inside a bounding box. Use for "every park in this area" or "all restaurants in this neighborhood". Bounding box is (south, west, north, east) in degrees.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | Yes | OSM tag filter (e.g., "leisure=park", "amenity=hospital") | |
| east | Yes | Maximum longitude | |
| west | Yes | Minimum longitude | |
| limit | No | Max results (1-1000, default 200) | |
| north | Yes | Maximum latitude | |
| south | Yes | Minimum latitude | |
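A sketch showing the coordinate roles the description spells out as (south, west, north, east); the tag is a schema example and the coordinates (a rough box around central Berlin) are illustrative:

{
  "name": "places_in_bbox",
  "arguments": {
    "tag": "leisure=park",
    "south": 52.48,
    "west": 13.35,
    "north": 52.55,
    "east": 13.45,
    "limit": 50
  }
}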
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description carries full burden. It specifies coordinate order and degrees but lacks details on output format, pagination, or limits. The 'limit' parameter is not mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus a parenthetical with no redundancy. Front-loads the main action and provides examples. Every sentence is purposeful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Missing details on the 'limit' parameter, return format, and tolerances. For a tool with no annotations or output schema, the description should cover more behavioral aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds value by clarifying the bounding-box order (south, west, north, east) and coordinate units, but otherwise does not go beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it finds OSM POIs inside a bounding box, with specific verb 'find' and resource 'OSM POIs'. Example use cases differentiate it from sibling tools like 'pois_near'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage contexts like 'every park in this area' and 'all restaurants in this neighborhood'. However, it does not mention when not to use or provide direct alternatives to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pois_near
Find OSM points of interest within a radius of a lat/lon. Pass an OSM key=value tag like "amenity=cafe", "shop=bakery", "tourism=museum", or just "amenity" to match any value. Returns nodes with names, tags, and coordinates.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | Yes | OSM tag filter (e.g., "amenity=cafe", "shop", "tourism=museum") | |
| limit | No | Max results (1-500, default 100) | |
| latitude | Yes | Center latitude | |
| radius_m | No | Search radius in metres (1-10000, default 1000) | |
| longitude | Yes | Center longitude | |
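A sketch of a radius search; the tag is a schema example and the coordinates (central Berlin) are illustrative:

{
  "name": "pois_near",
  "arguments": {
    "tag": "amenity=cafe",
    "latitude": 52.52,
    "longitude": 13.405,
    "radius_m": 500
  }
}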
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description carries full burden. It discloses that results are nodes with names, tags, and coordinates, but lacks details on rate limits, authentication, or potential side effects. Behavior is adequately described for a read-only query.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: two sentences cover purpose, parameters, and return format. No unnecessary words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description is complete for a simple query tool: explains input, output, and examples. No output schema, but return values are described. Minor omission of pagination or default limit behavior, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining return structure (nodes with names, tags, coordinates) and showing tag format examples, enhancing parameter understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool's function: finding OSM points of interest within a radius of lat/lon. It provides specific verb ('Find') and resource ('OSM points of interest'), distinguishing it from sibling tools like 'places_in_bbox' which uses a bounding box.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool (for radius-based POI search) and provides tag examples, but does not explicitly state when not to use it or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query
Run a raw Overpass QL query against OpenStreetMap. Use for complex spatial queries the helper tools can't express. Example: [out:json][timeout:25]; area["name"="Berlin"][admin_level=4]->.a; node["amenity"="library"](area.a); out body;. Returns the raw Overpass JSON (elements array with node/way/relation).
| Name | Required | Description | Default |
|---|---|---|---|
| qql | Yes | Full Overpass QL query string. Start with `[out:json][timeout:<n>];` and end with `out body;` (or similar output statement). | |
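Wrapping the description's own example query into a call; note that the parameter is named qql, and the inner quotes must be escaped in JSON:

{
  "name": "query",
  "arguments": {
    "qql": "[out:json][timeout:25]; area[\"name\"=\"Berlin\"][admin_level=4]->.a; node[\"amenity\"=\"library\"](area.a); out body;"
  }
}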
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description carries the burden. It explains return format ('raw Overpass JSON (elements array...)') and provides an example query. Missing explicit note that it's read-only, but this is clear from context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences plus an example. Purpose, usage, and example are front-loaded. Every part adds value with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the one-parameter tool with no output schema, the description fully covers purpose, usage, parameter details (via schema), and return format. The example further clarifies expected inputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description covers 100% of the single parameter with detailed guidance (including required prefix/suffix). The description adds an example but does not significantly enhance beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Run a raw Overpass QL query against OpenStreetMap', a specific verb and resource. It distinguishes from sibling tools by noting 'Use for complex spatial queries the helper tools can't express.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use for complex spatial queries the helper tools can't express', providing clear when-to-use context. It implies simpler alternatives exist though they are not named explicitly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses scoping to identifier and behavior when key is omitted. As a read operation, no destructive side effects need mention. Lacks detail on error handling (e.g., key not found), but overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states core function, second provides context and pairing. No unnecessary words; front-loaded with key action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description lacks details on return format (e.g., what is returned for a single key vs list) and error cases (e.g., key not found). However, for a simple retrieval tool, it covers essential usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single 'key' parameter is well-described in schema (100% coverage). Description adds value by explaining that omitting key lists all keys, which goes beyond schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a value saved via 'remember' or lists all keys if omitted, with concrete examples (ticker, address, notes). It distinguishes from siblings 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('look up context the agent stored earlier') and examples. Implicitly suggests not to re-derive from scratch. Missing explicit when-not-to-use or direct alternatives, but pairing with remember/forget is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
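A monitoring-style sketch using the documented ticker example and the "30d" shorthand the schema itself recommends:

{
  "name": "recent_changes",
  "arguments": {
    "type": "company",
    "value": "AAPL",
    "since": "30d"
  }
}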
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully bears the burden of transparency. It discloses the parallel fan-out to multiple sources, the return format (structured changes + count + URIs), and the since parameter format. It does not mention rate limits or result limits, but covers the main behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, conveying purpose, usage, and behavior in a few sentences. It front-loads the main action and then provides details. However, the first sentence is slightly long and could be broken for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately describes the return format (structured changes + total_changes + citation URIs). It covers the fan-out logic and parameter details. However, it does not mention potential limitations like result count or pagination, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers 100% of parameters with descriptions, but the description adds significant value: it explains the since format (ISO or relative), provides default values ('30d' or '1m'), and clarifies value (ticker or CIK) and type (only company supported). This goes beyond what the schema alone provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving recent changes for a company from multiple sources. It uses specific verbs and resources (fans out to SEC EDGAR, GDELT, USPTO) and distinguishes from siblings like entity_profile and compare_entities by focusing on temporal changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit example queries and specifies when to use the tool (e.g., 'what's happening with X?', 'any updates on Y?'). However, it does not mention when not to use it or suggest alternative tools for different contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
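The three memory tools form a lifecycle. A sketch of the full cycle as successive params objects, using a key from the schema's own examples; per recall's description, omitting key would instead list all stored keys:

{ "name": "remember", "arguments": { "key": "target_ticker", "value": "AAPL" } }
{ "name": "recall", "arguments": { "key": "target_ticker" } }
{ "name": "forget", "arguments": { "key": "target_ticker" } }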
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description fully discloses behavior: key-value scoping by identifier, persistence differences (authenticated vs anonymous), and retention period. This exceeds expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose, with no wasted words. Every sentence adds value: purpose, usage, scoping, persistence, and pairing with siblings.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description covers what the tool does, how to use it, what happens to stored data, and how it fits with siblings. Complete for a simple storage tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions for both parameters. The description adds context about key-value nature and scoping, but does not significantly enhance parameter meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'save' and resource 'data for reuse', with concrete examples like ticker, address, preference. It clearly distinguishes from siblings by mentioning pairing with 'recall' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: when discovering something worth carrying forward. Mentions complementary tools (recall, forget). Lacks explicit 'when not to use', but context is clear enough for high value.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
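A sketch using the description's drug example; per the description, this call should come back with RxCUI 1991306 plus ingredient and brand, along with pipeworx:// citation URIs:

{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}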
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns IDs and pipeworx:// citation URIs, but does not mention error handling, idempotency, external calls, or rate limits. The behavior is moderately transparent for a lookup tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with no wasted words. It front-loads the main action, includes usage context, examples, and a summary statement, all in a few sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description states that the tool returns 'IDs plus pipeworx:// citation URIs' but does not specify the output format or what happens on error (e.g., entity not found). It implies a single canonical identifier but gives multiple IDs in examples, causing slight ambiguity. Given no output schema, more detail would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description repeats exactly what the schema already says for the 'value' parameter. It adds no new meaning beyond the schema, so the score is at baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a specific verb and resource ('Look up the canonical/official identifier'), lists the types of identifiers returned (CIK, ticker, RxCUI, LEI), and provides concrete examples ('Apple' → AAPL / CIK 0000320193, 'Ozempic' → RxCUI 1991306). This makes the purpose immediately clear and distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool ('when a user mentions a name and you need the CIK...') and the order ('Use this BEFORE calling other tools that need official identifiers'). It lacks explicit mention of when not to use or alternative tools, but the guidance is strong enough to infer proper usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
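A sketch using the schema's own example claim; the response verdict would be one of the five values the description enumerates:

{
  "name": "validate_claim",
  "arguments": {
    "claim": "Apple's FY2024 revenue was $400 billion"
  }
}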
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full behavioral disclosure. It describes return values (verdict, structured form, actual value with citation, percent delta) and mentions efficiency (replaces 4–6 sequential calls). It does not mention error handling, permissions, or rate limits, but the core behavior is well-documented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action, followed by usage guidance, domain scope, return values, and comparative efficiency. Every sentence is informative with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (one parameter) and no output schema, the description comprehensively covers purpose, usage, limitations, and output structure. It could be slightly enhanced by mentioning error cases or fallback strategies for unsupported claims, but overall it is adequate for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds value by providing natural-language example inputs ('Apple's FY2024 revenue was $400 billion') and clarifying the parameter is a factual claim. This goes beyond the schema's terse description, aiding agent understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-checking/verifying natural-language claims against authoritative sources, and specifies the domain (company-financial claims via SEC EDGAR + XBRL). It distinguishes from sibling tools like 'ask_pipeworx' or 'query' by focusing on verification against structured financial data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when an agent needs to check whether something a user said is true' and provides example query patterns. It also notes the scope limitation ('v1 supports company-financial claims'), guiding appropriate use and excluding alternative tools for other domains.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes. Claiming lets you:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.