Aviationstack
Server Details
Aviationstack MCP — global flight + airport + airline data
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-aviationstack
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 17 of 17 tools scored. Lowest: 1.6/5.
The tools split into two unrelated groups: aviation (airlines, airports, cities, countries, flights, routes) and generic data query tools (ask_pipeworx, compare_entities, etc.). Within the pipeworx group, ask_pipeworx is a catch-all router that overlaps with specialized tools like compare_entities, validate_claim, and resolve_entity, making it unclear which tool to use.
Most tool names follow a consistent lowercase_snake_case pattern. However, there is a mix of noun-only names (airlines, airports, cities, countries, flights, routes) and verb-based names (ask_pipeworx, compare_entities, etc.), which is a minor inconsistency.
The server is named 'Aviationstack' but only 6 of 17 tools are aviation-related. The other 11 are a general-purpose data query suite (pipeworx), making the tool count excessive and mismatched to the server's apparent domain. The scope is unclear and feels bloated for an aviation-focused server.
For aviation, the tools cover basic airline/airport/flight data but lack advanced features like aircraft info or detailed tracking. However, the pipeworx tools can fill many gaps through natural-language queries, albeit in a generic way. The aviation surface is incomplete, yet the overall toolset reaches well beyond what the domain alone requires.
Available Tools
17 tools

airlines (Grade C)
Airline directory.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| offset | No | | |
| search | No | | |
| iata_code | No | 2-letter IATA | |
| icao_code | No | 3-letter ICAO | |
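As a sketch of how an agent might invoke this tool, here is a minimal JSON-RPC 2.0 payload using the standard MCP `tools/call` method; the argument value is illustrative, not taken from the listing.

```typescript
// Minimal MCP `tools/call` payload for the airlines tool (JSON-RPC 2.0).
// "AA" (American Airlines) is an illustrative 2-letter IATA code.
const airlinesRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "airlines",
    arguments: { iata_code: "AA" },
  },
};
```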
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as pagination (limit/offset), search capabilities, or any side effects. The description is too minimal to inform agent behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is only two words, which is overly terse. It is concise, but it sacrifices completeness and provides no essential context; the brevity is not backed by any real information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters, no output schema, and no annotations, the description is far too incomplete. It does not explain return values, pagination behavior, or how to effectively use the parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 40% schema description coverage (iata_code and icao_code have descriptions), the description adds no parameter-related information. It fails to compensate for the low schema coverage, leaving parameters like limit, offset, and search unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Airline directory' is a noun phrase that vaguely indicates the tool provides directory data about airlines, but it lacks a verb specifying the action (e.g., list, search). It does not clearly differentiate from siblings like airports or flights.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as airports, flights, or routes. The description does not provide context for appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
airports (Grade C)
Airport directory — name, IATA/ICAO codes, country, GPS, timezone.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| offset | No | | |
| search | No | Free-text — name / city | |
| iata_code | No | 3-letter IATA code | |
| icao_code | No | 4-letter ICAO code | |
| country_iso2 | No | ISO 3166-1 alpha-2 country | |
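A hypothetical free-text lookup constrained to one country might look like the sketch below; the search string and country code are invented for illustration.

```typescript
// MCP `tools/call` payload for the airports tool; values are illustrative.
// country_iso2 narrows the free-text search to US airports.
const airportsRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "airports",
    arguments: { search: "Kennedy", country_iso2: "US" },
  },
};
```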
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description only lists data fields. Fails to disclose behavioral traits like pagination, rate limits, or that it returns a list. For a simple lookup tool, this is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is concise but lacks structure. It front-loads the purpose but provides no further sections or details. Not overly wordy, but could be more organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 6 parameters and no output schema, the description is incomplete. It omits pagination details, response format, and does not explain how to filter by the listed fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67% (below 80%), and the description does not add meaning to input parameters. It mentions output fields like GPS and timezone, which are not present in the input schema, but does not clarify how parameters like limit/offset or search work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Describes itself as an 'Airport directory' listing fields like name, IATA/ICAO codes, country, GPS, timezone. Clearly indicates it provides airport reference data, distinguishing it from siblings like 'airlines' and 'cities'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Does not mention prerequisites, typical search scenarios, or relationships with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ask_pipeworx (Grade A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
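A sample invocation, reusing one of the example questions from the tool's own description:

```typescript
// MCP `tools/call` payload for ask_pipeworx; the question is adapted from
// the examples quoted in the tool description.
const askRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "ask_pipeworx",
    arguments: { question: "What is the current unemployment rate?" },
  },
};
```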
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full weight for behavioral disclosure. It explains the core behavior (routing, argument filling, result return) but does not mention limitations, error handling, latency, or output format. This is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with five sentences, front-loading the purpose and usage guidelines, followed by supporting details. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool with no output schema, the description covers the main aspects: what it does, when to use it, and examples. However, it omits details about the response format or potential errors, leaving minor gaps. Still sufficient for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although schema description coverage is 100%, the description adds substantial meaning beyond the schema by explaining the tool's functionality, listing supported data sources, and providing example questions. This enriches the parameter's semantics significantly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool answers natural-language questions by automatically selecting the appropriate data source, and provides explicit examples of query types and data sources, distinguishing it from sibling tools which are specific to particular domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells the agent when to use this tool ('if you don't want to figure out which Pipeworx pack/tool to call') and gives clear trigger examples, contrasting with sibling tools that are more targeted.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cities (Grade D)
City + primary-airport database.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| search | No | | |
| iata_code | No | City IATA code (e.g. "NYC" for all NYC airports) | |
| country_iso2 | No | | |
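Only iata_code is documented, so the sketch below leans on the one example the schema gives; the other parameters are omitted because their semantics are undocumented.

```typescript
// MCP `tools/call` payload for the cities tool. "NYC" comes from the
// schema's own example; limit/search/country_iso2 are left out since
// their behavior is undescribed.
const citiesRequest = {
  jsonrpc: "2.0",
  id: 4,
  method: "tools/call",
  params: {
    name: "cities",
    arguments: { iata_code: "NYC" },
  },
};
```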
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It merely states it is a 'database' without disclosing whether it is read-only, any rate limits, or potential side effects. This is insufficient for an agent to understand behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short, but this is underspecification rather than effective conciseness. It lacks critical information and does not front-load a clear purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no output schema, and empty annotations, the description fails to provide sufficient context for an agent to use it correctly. It is completely inadequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 25% schema description coverage (only iata_code has a description), the description adds no explanation for limit, search, or country_iso2. The description does not compensate for the missing parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'City + primary-airport database' is vague and does not specify what operations the tool performs (e.g., list, search, get). It barely adds context beyond the name and does not distinguish it from sibling tools like airports or countries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., airports, countries). There are no usage scenarios, prerequisites, or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
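A representative company comparison, reusing the tickers from the values parameter's example:

```typescript
// MCP `tools/call` payload for compare_entities; the tickers are the
// schema's own examples. type="company" selects the SEC EDGAR/XBRL path.
const compareRequest = {
  jsonrpc: "2.0",
  id: 5,
  method: "tools/call",
  params: {
    name: "compare_entities",
    arguments: { type: "company", values: ["AAPL", "MSFT"] },
  },
};
```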
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description discloses data sources (SEC EDGAR/XBRL for companies, FAERS for drugs) and return includes citation URIs. Does not mention rate limits or data freshness, but overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise (3 sentences) but contains detailed information. Slightly dense but efficient, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given complexity (two entity types, multiple data sources), description provides sufficient context for agent to decide when to invoke. No output schema but mentions return type (paired data + URIs).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds value by explaining the type enum ('company' pulls financial data, 'drug' pulls AE/approval/trial counts) and provides examples for values parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Compare 2–5 companies (or drugs) side by side in one call.' with specific verb and resource. Distinguishes from sibling entity_profile by focusing on comparison of multiple entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit trigger phrases ('compare X and Y', 'X vs Y', 'how do X, Y, Z stack up') and use cases (tables/rankings of financial data or drug events). No explicit when-not-to-use but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
countries (Grade D)
Country reference with capital city, currency, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| search | No | | |
| country_iso2 | No | | |
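None of the parameters carry descriptions, so any example is inferred; assuming country_iso2 takes an ISO 3166-1 alpha-2 code as it does in the airports tool, a call might look like:

```typescript
// MCP `tools/call` payload for the countries tool. The ISO alpha-2 value
// "DE" is an assumption inferred from the parameter name and siblings.
const countriesRequest = {
  jsonrpc: "2.0",
  id: 6,
  method: "tools/call",
  params: {
    name: "countries",
    arguments: { country_iso2: "DE" },
  },
};
```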
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description lacks any behavioral traits such as pagination, data source, or side effects. The description is too minimal to disclose behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is concise but severely under-specified. It fails to earn its place by omitting essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, 0% schema coverage, and a vague description, the tool definition is incomplete for an AI agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description does not explain any of the three parameters (limit, search, country_iso2). No added meaning beyond the parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Country reference with capital city, currency, etc.' which indicates it provides country data, but does not specify the action (list, search, get) and is vague. It is better than a tautology but fails to clearly define the tool's function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like 'cities' or 'airports'. No context on filtering, prerequisite, or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
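A sketch using one of the example queries from the query parameter's description, with a tightened result limit:

```typescript
// MCP `tools/call` payload for discover_tools; the query string is one of
// the schema's examples, and limit overrides the documented default of 20.
const discoverRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "discover_tools",
    arguments: { query: "look up FDA drug approvals", limit: 5 },
  },
};
```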
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description covers key behavior: returns top-N relevant tools with names and descriptions, default/max limits. It lacks details on ranking criteria or error handling, but overall sufficient for understanding the tool's action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise and front-loaded with purpose and usage. The list of examples is helpful but could be slightly trimmed; still well-structured and not overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return format (top-N tools with names+descriptions) and advises calling first. Adequate for a discovery tool, though it could mention the scope (tools on this server only).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds value by providing examples and context for the 'query' parameter and clarifying default/max for 'limit'. Goes beyond the schema's basic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it finds tools by describing data or task, lists example domains, and distinguishes it as a discovery tool to be called first. This sets it apart from sibling tools which are specific data retrieval tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use: when browsing or discovering tools, and instructs to call FIRST when many tools are available. Provides clear context for usage without needing to specify when not to use, as its role is clearly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (Grade A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
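A minimal call using the ticker format the description mandates (names are not accepted; resolve_entity must run first if only a name is known):

```typescript
// MCP `tools/call` payload for entity_profile; "AAPL" is the ticker the
// description itself uses as an example.
const profileRequest = {
  jsonrpc: "2.0",
  id: 8,
  method: "tools/call",
  params: {
    name: "entity_profile",
    arguments: { type: "company", value: "AAPL" },
  },
};
```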
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries the full burden. It discloses the return contents (SEC filings, fundamentals, patents, news, LEI) and mentions citation URIs. It does not explicitly state idempotency or read-only nature, but the context implies a safe lookup.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded paragraph with zero wasted words. It efficiently combines purpose, usage triggers, return details, and parameter constraints without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately details the return (filings, revenue, patents, news, LEI, citation URIs). It misses potential notes on result limits or pagination, but for a profile tool the completeness is high.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, yet the description adds critical context: type is restricted to 'company' (person/place coming soon), value can be ticker or CIK, and names are not supported—requiring prior resolution. This goes well beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Get everything about a company in one call,' clearly stating the verb and resource. It distinguishes itself from siblings by noting it replaces 10+ pack tools across multiple domains, making its purpose unique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists trigger phrases ('tell me about X', 'profile of Acme') and provides a clear exclusion: if the user only has a name, use resolve_entity first. This gives precise when-to-use and when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
flights (Grade C)
Real-time and scheduled flights with departure/arrival airport, time, gate, status. Combine filters to narrow down.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-100 (default 25) | |
| offset | No | 0-based offset | |
| arr_iata | No | Arrival airport IATA | |
| dep_iata | No | Departure airport IATA (e.g. "JFK") | |
| flight_iata | No | IATA flight number (e.g. "AA100") | |
| flight_icao | No | ICAO flight number | |
| airline_iata | No | Airline IATA code | |
| flight_status | No | scheduled \| active \| landed \| cancelled \| incident \| diverted | |
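An illustrative query combining filters, as the description suggests: active departures from JFK, capped at ten results (the specific values are hypothetical).

```typescript
// MCP `tools/call` payload for the flights tool; combines dep_iata,
// flight_status, and limit. All argument values are illustrative.
const flightsRequest = {
  jsonrpc: "2.0",
  id: 9,
  method: "tools/call",
  params: {
    name: "flights",
    arguments: { dep_iata: "JFK", flight_status: "active", limit: 10 },
  },
};
```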
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It states the tool returns real-time and scheduled flights, but does not explain what 'real-time' entails, data freshness, authentication needs, or rate limits. Significant gaps remain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: a noun phrase listing the key output fields, plus a short hint to combine filters. It could be more structured, but it conveys the core purpose efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description outlines key output fields. But it omits pagination details (limit/offset) and ordering. For a search tool with 8 filters, this is adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds 'Combine filters to narrow down', which is helpful but not substantial. Baseline is 3, and no strong reason to deviate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly indicates the tool returns flight information with specific attributes (airport, time, gate, status). It distinguishes from siblings like airlines or airports by focusing on flights. However, it lacks a strong verb like 'list' or 'search', making it slightly less direct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings. It vaguely mentions 'Combine filters to narrow down', but this is internal usage advice, not context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
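A minimal deletion call; the key name is borrowed from the remember tool's schema examples and is hypothetical here.

```typescript
// MCP `tools/call` payload for forget; "target_ticker" is a hypothetical
// key, taken from remember's example keys.
const forgetRequest = {
  jsonrpc: "2.0",
  id: 10,
  method: "tools/call",
  params: {
    name: "forget",
    arguments: { key: "target_ticker" },
  },
};
```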
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates a destructive action ('Delete'), but with no annotations, it could add more behavioral context, such as whether deletion is permanent or irreversible. It is clear enough for a simple operation but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no redundant information. Every sentence adds value: the first states the action, the second provides usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 required parameter, no output schema, no annotations), the description covers purpose, usage, and context adequately. Slight gap in behavioral details, but overall complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add significant meaning beyond the schema's parameter description. Baseline 3 is appropriate as the schema already defines 'key' adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete a previously stored memory by key.' It uses a specific verb and resource, and distinguishes itself from sibling tools by explicitly mentioning 'remember' and 'recall'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios: 'when context is stale, the task is done, or you want to clear sensitive data.' It also recommends pairing with 'remember' and 'recall', which are siblings, offering clear guidance on when to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
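A sketch of a data_gap report; the message text is invented for illustration and echoes the aviation coverage gap noted in the assessment above.

```typescript
// MCP `tools/call` payload for pipeworx_feedback; type is one of the
// documented enum values, and the message is an invented example.
const feedbackRequest = {
  jsonrpc: "2.0",
  id: 11,
  method: "tools/call",
  params: {
    name: "pipeworx_feedback",
    arguments: {
      type: "data_gap",
      message: "The aviation tools expose no aircraft registration data.",
    },
  },
};
```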
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits: that the team reads digests daily, feedback affects roadmap, is rate-limited, free, and does not count against tool-call quota. With no annotations provided, the description carries the full burden and covers all important aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet comprehensive, using three well-structured sentences. It front-loads the purpose and quickly covers usage, constraints, and behavioral context with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters including a nested object) and lack of output schema, the description provides sufficient context: purpose, usage guidelines, enums explained, rate limits, and behavioral impact. It leaves no critical gaps for an AI agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, so the schema already documents each parameter thoroughly. The description does not add significant new semantics about parameters beyond what the schema provides, though it does reinforce the importance of specificity. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for providing feedback to the Pipeworx team, enumerating specific types (bug, feature, data_gap, praise) and their corresponding use cases. It differentiates itself from sibling tools like ask_pipeworx and discover_tools by focusing exclusively on feedback.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (e.g., 'Use when a tool returns wrong/stale data') and provides clear guidance on what not to do ('don't paste the end-user's prompt'). It also mentions the rate limit of 5 per identifier per day and that it is free and does not count against quota.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
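Per the description, omitting key lists every stored key; a listing call is simply:

```typescript
// MCP `tools/call` payload for recall with no arguments, which lists all
// saved keys for the caller's identifier.
const recallRequest = {
  jsonrpc: "2.0",
  id: 12,
  method: "tools/call",
  params: {
    name: "recall",
    arguments: {},
  },
};
```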
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description fully explains behavior (retrieve by key, list all) and scoping to identifier. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action, no extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a simple retrieval tool; covers parameter, scope, and sibling context; no output schema needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, description adds context about listing keys on omit and pairing with remember/forget, adding value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves a previously saved value or lists keys, distinguishing it from sibling tools remember and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Specifies when to use (look up stored context) and implicitly when not (use remember to save, forget to delete). Alternatives named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (Grade A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
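An illustrative monitoring call using the relative shorthand the since parameter recommends; the ticker is hypothetical.

```typescript
// MCP `tools/call` payload for recent_changes; "30d" is the typical
// monitoring window the schema suggests, "MSFT" an illustrative ticker.
const changesRequest = {
  jsonrpc: "2.0",
  id: 13,
  method: "tools/call",
  params: {
    name: "recent_changes",
    arguments: { type: "company", value: "MSFT", since: "30d" },
  },
};
```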
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses that the tool fans out to multiple external sources (SEC EDGAR, GDELT, USPTO) and specifies the return format (structured changes, total_changes count, citation URIs). This provides good behavioral transparency though it could mention if any side effects exist (unlikely for a read tool).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single focused paragraph that front-loads the purpose and packs all necessary information efficiently without redundancy. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema and zero annotations, the description comprehensively covers inputs, outputs, and sources. It explains the fan-out to multiple data sources, the return structure, and the citation format, making it self-contained and easy to use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All parameters have schema descriptions, and the description adds significant value: it explains the 'since' parameter accepts both ISO dates and relative shorthand with examples, gives typical usage ('30d' or '1m'), and clarifies that 'value' can be a ticker or CIK. The 'type' parameter is also explained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a clear question 'What's new with a company in the last N days/months?' and provides multiple example user queries. The purpose is highly specific and distinct from sibling tools which are primarily about travel entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool via example queries like 'what's happening with X?' and 'any updates on Y?'. While it doesn't explicitly mention when not to use it, the context is clear and the signals provided are sufficient for an AI agent to decide.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
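A sample save, reusing the key example from the schema with an illustrative ticker as the value:

```typescript
// MCP `tools/call` payload for remember; the key reuses the schema's
// example key name, and the value is an illustrative ticker.
const rememberRequest = {
  jsonrpc: "2.0",
  id: 14,
  method: "tools/call",
  params: {
    name: "remember",
    arguments: { key: "target_ticker", value: "AAPL" },
  },
};
```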
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavioral traits: scoped by identifier, persistent for authenticated users, 24-hour retention for anonymous sessions. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with purpose. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple parameters and no output schema, the description completely covers purpose, usage, behavior, and persistence, making it fully self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description provides examples in the schema itself but does not add new meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'save' and resource 'data to reuse later', differentiating it from siblings recall and forget. It specifies that data is stored as key-value pairs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly describes when to use ('when you discover something worth carrying forward') and mentions complementary tools recall and forget. It also clarifies scoping and persistence behavior.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
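A sketch resolving a drug name to its identifiers, using the schema's own example value:

```typescript
// MCP `tools/call` payload for resolve_entity; "ozempic" is the example
// drug name given in the value parameter description.
const resolveRequest = {
  jsonrpc: "2.0",
  id: 15,
  method: "tools/call",
  params: {
    name: "resolve_entity",
    arguments: { type: "drug", value: "ozempic" },
  },
};
```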
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return format (IDs plus citation URIs) and scope (canonical/official identifiers). Does not mention read-only status, rate limits, or error handling, but for a lookup tool this is adequate given detail level.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding value: purpose, usage trigger, example, return type, sequencing advice, efficiency claim. No redundancy or filler. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter, no-output-schema tool, the description covers purpose, input variants, output components, and workflow context. Nothing obvious is missing for effective agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with inline parameter descriptions. The description reinforces them with real-world examples (Apple → AAPL, Ozempic → RxCUI) and adds context that the tool consolidates multiple identifier systems, exceeding schema baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'look up' and resource 'canonical/official identifier' with specific examples (CIK, ticker, RxCUI, LEI). Distinguishes from siblings by positioning as a prerequisite identifier resolver.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use – when a user mentions a name needing official identifiers – and provides ordering guidance: 'Use this BEFORE calling other tools that need official identifiers.' Also notes it replaces 2–3 lookup calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
routes (Grade D)
Scheduled routes between airports.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| arr_iata | No | | |
| dep_iata | No | | |
| airline_iata | No | | |
| flight_number | No | | |
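The parameters are undescribed, so the sketch below assumes dep_iata and arr_iata take 3-letter airport IATA codes, mirroring the flights tool; the values are hypothetical.

```typescript
// MCP `tools/call` payload for the routes tool; parameter semantics are
// assumed from the flights tool, and JFK/LHR are illustrative.
const routesRequest = {
  jsonrpc: "2.0",
  id: 16,
  method: "tools/call",
  params: {
    name: "routes",
    arguments: { dep_iata: "JFK", arr_iata: "LHR" },
  },
};
```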
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description fails to disclose behavioral traits such as whether the tool is read-only, pagination behavior, rate limits, or what happens with empty results. The description alone carries the full burden and is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, but it is underspecified. It omits crucial information, making it not an efficient use of a short description. True conciseness would pack more value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 optional parameters, no annotations, and no output schema, the description is severely incomplete. It does not explain return values, filtering behavior, or any other context needed to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 5 parameters with no descriptions (0% coverage). The description does not mention any parameter, failing to compensate for the schema's lack of semantics. An agent would have no idea what parameters like 'arr_iata' or 'limit' mean.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Scheduled routes between airports' states the subject and action but is too vague. It does not differentiate from sibling tools like 'flights' or 'airports', which might also relate to routes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., 'flights' for specific flights or 'airports' for airport data). The description provides no context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (Grade A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
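A minimal invocation reusing the example claim from the claim parameter's description:

```typescript
// MCP `tools/call` payload for validate_claim; the claim string is the
// schema's own example. The response carries one of the documented
// verdicts plus the actual value and percent delta.
const validateRequest = {
  jsonrpc: "2.0",
  id: 17,
  method: "tools/call",
  params: {
    name: "validate_claim",
    arguments: { claim: "Apple's FY2024 revenue was $400 billion" },
  },
};
```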
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the return values (verdict types, structured form, actual value with citation, percent delta) and data source (SEC EDGAR + XBRL). Lacks details on error handling or limitations, but is transparent enough for correct invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loaded with the main action, and each sentence adds value. It efficiently covers purpose, usage, scope, and output without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description thoroughly explains the return values, including types of verdicts, structured form, actual value, citation, and percent delta. It also covers scope and benefits, making it complete for a single-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter 'claim' is already well-described in the input schema with examples. The tool description reinforces the parameter but does not add new semantics beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (fact-check, verify, validate) and the resource (natural-language factual claim). It also distinguishes itself from sibling tools by its specific function and scope, such as replacing multiple sequential calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool (when checking truth of a user's claim) and provides example queries. Also specifies scope (company-financial claims for public US companies), implicitly defining when not to use it. Could be improved by explicitly mentioning alternatives, but is clear overall.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Once claimed, you can:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.