Aviation Weather
Server Details
Aviation Weather MCP — METAR, TAF, PIREPs, AIRMET/SIGMET, station info
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-aviation-weather
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 19 of 19 tools scored. Lowest: 1.8/5.
The tools split into two clear groups: aviation weather (metar, taf, etc.) and general-purpose data (entity_profile, compare_entities, etc.). However, within the general group there is overlap between ask_pipeworx, discover_tools, entity_profile, and validate_claim, making it unclear which to choose for a given task.
Naming conventions are mixed: multi-word tools use snake_case (ask_pipeworx, compare_entities), others are concatenated lowercase (stationinfo, gairmet), and some are single words (sigmet, metar). This inconsistency makes tool names harder to predict.
With 19 tools, the count is high for a server named 'Aviation Weather', since most of the tools are unrelated to aviation. The inclusion of memory, company-finance, and drug-data tools feels out of scope for the implied domain.
For aviation weather, key tools like radar and satellite are missing, but the core METAR/TAF/AIRMET/SIGMET set is present. The general-purpose tools cover a wide range but are shallow (e.g., only one company comparison tool). The ask_pipeworx tool acts as a catch-all, filling some gaps.
Available Tools
19 tools
airmet (Grade: D)
Currently active AIRMETs.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | | |
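For illustration, a minimal sketch of how an agent might call this tool over the listed Streamable HTTP transport. It assumes the official `mcp` Python SDK client; the endpoint URL is a placeholder (the listing above does not publish one), and `format` is omitted because its valid values are undocumented.

```python
# Hedged sketch: call the airmet tool via Streamable HTTP using the mcp Python SDK.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.invalid/mcp"  # placeholder, not the real endpoint


async def fetch_active_airmets() -> None:
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # "format" is optional and undocumented, so it is left out here.
            result = await session.call_tool("airmet", arguments={})
            print(result.content)


asyncio.run(fetch_active_airmets())
```

The same `session.call_tool(name, arguments=...)` pattern applies to every tool below; the later sketches show only the argument payloads.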
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must cover behavior. It only states 'currently active,' implying read-only, but lacks details on update frequency, geographic scope, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely short (four words) yet not appropriately concise: it omits critical information. This is under-specification, not efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of sibling weather tools and no output schema, the description is incomplete. It does not explain what AIRMETs are, how to use the format parameter, or how this tool differs from similar ones.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'format' is not described at all. With 0% schema coverage, the description fails to explain its purpose or valid values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Currently active AIRMETs' lacks a clear verb (e.g., retrieve, list), making it vague. It does not differentiate from sibling tools like gairmet or sigmet.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as gairmet, metar, or sigmet. No context on prerequisites or use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ask_pipeworx (Grade: A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
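A hedged sketch of the arguments an agent might pass, reusing the call pattern shown under airmet; the question is one of the examples from the tool's own description.

```python
# Illustrative ask_pipeworx arguments; the question text comes from the description's examples.
ask_pipeworx_args = {
    "question": "What is the US trade deficit with China?",  # required: natural-language question
}
```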
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It describes the routing behavior and lists many sources, but omits details on read-only nature, latency, authentication, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: purpose first, then usage guidance, then examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single simple parameter and no output schema, the description covers the tool's purpose, usage context, and examples thoroughly. No additional details are necessary for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'question' already has a clear schema description. The description adds value by providing context on the scope of possible questions and examples, enriching beyond what the schema alone offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers natural-language questions by auto-selecting data sources, and distinguishes itself from specialized sibling tools like 'airmet' or 'metar' by being the generic query entry point.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use ('when a user asks... and you don't want to figure out which Pipeworx tool to call') and provides multiple examples, though it does not explicitly state when not to use it (e.g., for direct source-specific queries).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade: A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
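A hedged sketch of a company comparison payload; the tickers are the examples given in the schema.

```python
# Illustrative compare_entities arguments for type="company".
compare_entities_args = {
    "type": "company",           # "company" or "drug"
    "values": ["AAPL", "MSFT"],  # 2-5 tickers/CIKs; for type="drug", 2-5 drug names
}
```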
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses data sources (SEC EDGAR/XBRL, FAERS, FDA approvals, trials) and explains the tool retrieves data ('pulls'). It does not explicitly state the tool is read-only, but the context implies no side effects. Slightly more explicit behavioral disclosure would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured: front-loaded with purpose, then trigger conditions, then details per type, then return format and efficiency claim. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description explains the return format (paired data + citation URIs) and data sources. It covers both entity types with appropriate detail. For a tool with two simple parameters, the description is complete and actionable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions. The description adds significant value: explains that 'type' determines data pulled (financial vs. clinical), gives examples for 'values' (tickers like AAPL, drug names like ozempic), and clarifies input format expectations beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: comparing 2-5 companies or drugs side by side, with specific verbs like 'compare' and resources (companies with financial data, drugs with clinical data). It distinguishes from sibling tools (e.g., entity_profile, validate_claim) by focusing on side-by-side comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists trigger phrases (e.g., 'compare X and Y', 'how do X, Y, Z stack up') and contexts (tables/rankings). It notes efficiency gains over sequential calls. However, it does not mention when not to use this tool or alternative tools by name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
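A hedged sketch of a discovery query, reusing the example query from the schema.

```python
# Illustrative discover_tools arguments.
discover_tools_args = {
    "query": "look up FDA drug approvals",  # required: natural-language task description
    "limit": 10,                            # optional: default 20, max 50
}
```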
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies read-only behavior by saying 'returns' but does not explicitly state it is non-destructive or safe. It lacks details about rate limits, authentication needs, or any side effects, but the implied search behavior is mostly clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear opening statement, a list of domains, and explicit usage guidance. It is slightly long due to the domain list, but every part adds value. It is front-loaded and not verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (many siblings, two parameters, no output schema) the description is adequate. It explains the purpose, usage, and return type. The only gap is not explicitly stating it is a read-only operation, but the tool's nature as a discovery tool makes this obvious.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by providing example queries (e.g., 'analyze housing market trends') and explaining the limit parameter's purpose. This goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds tools by describing data or task, lists many specific domains, and distinguishes itself from individual data tools like 'airmet' and 'metar' which are sibling tools for specific data types. It also mentions returning top-N relevant tools with names and descriptions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when you need to browse, search, look up, or discover what tools exist' and 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This provides clear guidance on when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (Grade: A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
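A hedged sketch of a profile request; the ticker is the example from the description, and a bare company name would require resolve_entity first.

```python
# Illustrative entity_profile arguments; only type="company" is supported today.
entity_profile_args = {
    "type": "company",
    "value": "AAPL",  # ticker or zero-padded CIK (e.g., "0000320193"); names not supported
}
```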
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full transparency burden. It discloses that the tool returns multiple data types (SEC filings, fundamentals, patents, news, LEI) with citation URIs. Since it's a read-only operation, the description implies no side effects, which is sufficient. However, it does not explicitly state idempotency or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately sized and well-structured. It front-loads the core purpose and then expands on inputs and outputs. While every sentence earns its place, it could be slightly tighter, but overall it is effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool and the lack of an output schema, the description fairly completely enumerates the returned data types (SEC filings, fundamentals, patents, news, LEI). It could mention any limits (e.g., number of recent items), but it is adequate for an agent to understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for both parameters. The description adds context beyond the schema: it explains that 'value' can be a ticker or CIK, and clarifies that names are not supported, redirecting to resolve_entity. This additional guidance is valuable for correct usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get everything about a company in one call.' It lists specific query examples and the data returned, making the action unambiguous. It differentiates from the sibling tool resolve_entity by noting when to use that instead.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit usage scenarios are provided, such as user queries like 'tell me about X' or 'research Microsoft.' It also advises when not to use this tool (if only a name is available, use resolve_entity first). This is excellent guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
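A hedged sketch of a deletion payload; the key is a hypothetical value saved earlier via remember.

```python
# Illustrative forget arguments.
forget_args = {
    "key": "target_ticker",  # required: memory key to delete
}
```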
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must convey behavior. It states deletion, which implies destructive action. Could mention irreversibility, but adequate for simple tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no waste. Front-loaded with action and usage, concise yet complete.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter and simple operation, description sufficiently covers purpose and usage. No output schema needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter description 'Memory key to delete'. Description adds no additional meaning beyond schema, meeting baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Delete a previously stored memory by key', using specific verb and resource. Distinguishes from siblings like remember and recall.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when to use: 'context is stale, task done, clear sensitive data'. Also suggests pairing with remember and recall, offering clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gairmet (Grade: C)
Graphical AIRMETs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | turb \| ice \| sfc_winds \| lifr \| ifr (default any) | |
| format | No | | |
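A hedged sketch of a filtered request; both parameters are optional and `format` has no documented values, so only `type` is set here.

```python
# Illustrative gairmet arguments.
gairmet_args = {
    "type": "turb",  # one of: turb, ice, sfc_winds, lifr, ifr (default: any)
}
```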
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description must bear full burden. It only says 'Graphical AIRMETs' with no disclosure of data source, update frequency, request limitations, or output nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two words), which is good for front-loading, but it sacrifices clarity and completeness. It could be slightly expanded to add value while remaining brief.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters, no output schema, and no annotations, the description fails to provide a complete picture. It does not explain what the output is (e.g., image URL, graphic) or how parameters affect results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no meaning beyond the schema. The 'type' parameter has enum values partially documented in schema, but 'format' is undocumented. Overall, parameter semantics are weak.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Graphical AIRMETs' clearly indicates the tool returns graphical representations of AIRMETs, distinguishing it from text-based siblings like 'airmet'. However, it lacks specifics on output format or purpose elaboration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidelines provided; does not specify when to use this tool versus siblings like 'airmet' or 'sigmet', nor any conditions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
metar (Grade: A)
METAR observations. ids = comma-sep ICAO codes (KSFO,EGLL,LFPG).
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Comma-separated ICAO airport codes | |
| format | No | json (default) \| raw \| xml \| html | |
| hours_before | No | How far back to look, 1-24 (default 1) | |
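A hedged sketch of a multi-station observation request; the ICAO codes are the examples from the description.

```python
# Illustrative metar arguments.
metar_args = {
    "ids": "KSFO,EGLL,LFPG",  # required: comma-separated ICAO airport codes
    "format": "json",         # optional: json (default), raw, xml, or html
    "hours_before": 2,        # optional: 1-24, default 1
}
```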
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses format options (json, raw, xml, html) and a time range parameter (hours_before), but does not explain what happens if no observations are available, error handling, or data source. The behavior is partially transparent but leaves gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of a single complete sentence and an example. It is front-loaded with the tool's purpose. Every word contributes value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple parameters, the description is minimally adequate. It explains the input format and available output formats, but does not describe what the returned data contains (e.g., wind, visibility, temperature). An agent may need to infer or request details for complex use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all parameters. The description adds only a redundant example for 'ids' (comma-sep ICAO codes). No additional meaning is provided for 'format' or 'hours_before' beyond what the schema describes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'METAR observations' and provides an example of comma-separated ICAO codes, which unambiguously identifies the tool as a weather observation fetcher. It distinguishes itself from siblings like taf (forecasts) and sigmet (significant weather).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use METAR vs alternatives. The name implies it is for current weather observations, but the description does not contrast with siblings like taf or pirep. Usage context is implicitly clear for an agent familiar with aviation weather, but formal guidelines are absent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade: A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
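A hedged sketch of a feedback payload; the message and context values are hypothetical examples.

```python
# Illustrative pipeworx_feedback arguments (rate-limited to 5 per identifier per day).
pipeworx_feedback_args = {
    "type": "data_gap",  # bug, feature, data_gap, praise, or other
    "message": "The taf tool's 'format' parameter has no documented values.",  # <= 2000 chars
    "context": "aviation weather pack / taf tool",  # optional structured context
}
```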
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavioral traits: rate-limited to 5 per identifier per day, free, not counting against quota, team reads daily, and signals affect roadmap. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise at about 6 sentences, front-loaded with main purpose, each sentence adds necessary information without redundancy. Well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers when, how, and constraints. It lacks return value details, but for a feedback tool, the key aspects (rate limit, quota, usage) are complete. Minor gap on confirmation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions; the description adds value by providing examples for each type and guidance on message content, such as being specific and using tool names, beyond what schema alone provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool as a feedback mechanism for the Pipeworx team, specifying it is for bugs, feature requests, data gaps, and praise. It distinguishes itself from sibling tools by its unique feedback purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (for wrong data, missing tools, praise) and what not to do (avoid pasting end-user prompts). Also provides rate limits and quota information, offering clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pirep (Grade: C)
Pilot reports. Optional id (station), age (hours), distance (NM), level (flight level).
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Center station for proximity filter | |
| age | No | Lookback hours, default 1.5 | |
| level | No | Flight level filter (FL310 = 310) | |
| format | No | json (default) \| raw | |
| distance_nm | No | Distance from id in nautical miles, default 200 | |
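A hedged sketch of a proximity-filtered request; the station and filter values are hypothetical.

```python
# Illustrative pirep arguments.
pirep_args = {
    "id": "KSFO",        # optional: center station for the proximity filter
    "age": 2,            # optional: lookback hours, default 1.5
    "distance_nm": 150,  # optional: radius from id in nautical miles, default 200
    "level": 310,        # optional: flight level filter (FL310 = 310)
    "format": "json",    # optional: json (default) or raw
}
```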
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only lists optional parameters and does not state that the tool is read-only, whether it requires authentication, what the output includes, or whether results are paginated. The behavior is opaque.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short (one sentence plus parameter hints) and avoids unnecessary words. However, it could be improved by starting with a verb to immediately convey the action, making it slightly more effective without adding length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters, no output schema, and many sibling tools, the description does not cover what the user gets back (e.g., format options are in schema but no example), how to decide between this and similar tools, or any practical usage tips. It feels incomplete for a production MCP tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds some value by appending units (hours, NM, flight level) and clarifying 'id (station)', but it mostly repeats the schema information without deeper explanation of parameter interactions or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description says 'Pilot reports' but lacks an action verb like 'retrieve' or 'list', making it unclear whether the tool fetches, creates, or manages reports. It does hint at filtering parameters, so the purpose is vaguely understandable but not explicitly stated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool over siblings like airmet, metar, or sigmet. There is no mention of use cases, prerequisites, or alternatives, leaving the agent to infer context from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
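A hedged sketch of a retrieval payload; omitting `key` entirely would instead list all stored keys. The key shown is a hypothetical value saved earlier via remember.

```python
# Illustrative recall arguments.
recall_args = {
    "key": "target_ticker",  # omit this field to list all saved keys
}
```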
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description explains the key behavior (omitting key lists all) and scoping. Lacks details on error handling for missing keys, but adequate for a retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences that front-load the main purpose, followed by usage context and pairing instructions. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value retrieval tool, the description covers purpose, usage, and parameter. No output schema, but return behavior is implicit. Slight lack of detail on missing key response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds value by clarifying that omitting 'key' lists all saved keys, which is not explicit in the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a value saved via 'remember' or lists all keys when omitted. It distinguishes itself from siblings 'remember' and 'forget' by focusing on retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: to look up context stored earlier, and mentions pairing with 'remember' to save and 'forget' to delete. Also describes scoping per identifier.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (Grade: A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
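A hedged sketch of a monitoring request using the relative-date shorthand; the ticker is illustrative.

```python
# Illustrative recent_changes arguments.
recent_changes_args = {
    "type": "company",  # only "company" is supported today
    "value": "MSFT",    # ticker or zero-padded CIK
    "since": "30d",     # ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y")
}
```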
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description fully discloses behavior: it fans out to three sources (SEC, GDELT, USPTO), returns structured changes with counts and citation URIs, and explains the `since` parameter format. No hidden traits left to inference.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single well-structured paragraph that efficiently packs purpose, usage, parameter behavior, and return format without redundancy. Every sentence provides unique, necessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (fan-out to three sources, 3 parameters, no output schema), the description fully covers what the tool does, how to use it, and what it returns. No gaps remain for an AI agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the description still adds significant value: explains `since` formats in detail (ISO date or relative shorthand), clarifies `value` can be ticker or CIK, and reinforces `type` is only 'company'. This goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'What's new with a company in the last N days/months?' and provides explicit examples of user queries. It distinguishes from siblings by focusing on dynamic changes (news, filings, patents) vs. static profiles or comparisons.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives concrete use cases ('when a user asks...') and example queries, but does not explicitly mention when NOT to use the tool or list alternative tools for different scenarios. Still, the context is clear enough for typical usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
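A hedged sketch of a save payload; the key follows the schema's examples and the value is hypothetical.

```python
# Illustrative remember arguments.
remember_args = {
    "key": "target_ticker",  # e.g., "subject_property", "user_preference"
    "value": "AAPL",         # any text: findings, addresses, preferences, notes
}
```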
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses key-value pair storage scoped by identifier, persistent for authenticated users, 24-hour retention for anonymous sessions. Does not discuss any destructive aspects as none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four concise sentences, front-loaded with purpose, then usage, then behavioral details, ending with pairing. Every sentence is necessary and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter no-output tool, the description is complete: it covers what, when, how long, and how it relates to siblings. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. Description adds value by providing concrete key examples (e.g., 'subject_property') and clarifying value is 'any text', enhancing beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Save data the agent will need to reuse later' with a specific verb and resource. It distinguishes from siblings by naming recall and forget as companions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use: 'when you discover something worth carrying forward' and provides examples. Mention of recall and forget implies when to use alternatives, but no explicit 'do not use when' statement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade: A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
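A hedged sketch of a drug lookup; the drug name is the example from the schema.

```python
# Illustrative resolve_entity arguments.
resolve_entity_args = {
    "type": "drug",      # "company" or "drug"
    "value": "ozempic",  # for companies: ticker, CIK, or name
}
```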
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains returns IDs plus citation URIs, and which ID systems are covered. Does not address error cases or performance, but adequate for a lookup tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences with no wasted words. Each sentence adds value: purpose, usage guidance, examples, and benefit statement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains return type. Mentions IDs and URIs. Could include error conditions but sufficient given sibling tools and context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description provides examples of valid values and output IDs, adding some context but not extra syntax beyond what the schema offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it resolves canonical identifiers for companies or drugs, using specific verbs and resources. It distinguishes itself from siblings like compare_entities and entity_profile by focusing on identifier resolution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this BEFORE calling other tools that need official identifiers' and provides examples. Lacks explicit exclusion criteria but gives good context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sigmet (Grade: C)
Currently active SIGMETs.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | json (default) \| raw | |
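A hedged sketch of the only parameter this tool accepts.

```python
# Illustrative sigmet arguments.
sigmet_args = {
    "format": "json",  # optional: json (default) or raw
}
```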
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must cover behavioral traits. 'Currently active' hints at a read-only, filtered query, but it does not disclose return structure, pagination, authorization needs, or rate limits. The description adds minimal behavioral context beyond what can be inferred from the tool name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short (4 words), which is concise but at the expense of completeness. It front-loads the key idea but omits useful context. It could benefit from an additional sentence without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema and no output schema, the description is incomplete. It does not explain what the response contains (e.g., a list of SIGMETs, details per SIGMET) or any constraints (e.g., geographic bounds). An agent would have to infer behavior from the tool name alone.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a description for 'format' indicating default 'json'. The tool description adds nothing beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Currently active SIGMETs' identifies the resource (SIGMETs) and a temporal filter (active), but lacks an explicit verb (e.g., retrieve, list). It does not differentiate from sibling tools like airmet or gairmet, which serve similar advisory purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., airmet, gairmet). There is no mention of context, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stationinfo (Grade: B)
Aerodrome metadata — lat/lon, elevation, country.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Comma-separated ICAO codes |
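A hedged sketch of a metadata lookup; the ICAO codes are illustrative.

```python
# Illustrative stationinfo arguments.
stationinfo_args = {
    "ids": "KSFO,EGLL",  # required: comma-separated ICAO codes
}
```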
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only mentions metadata fields, omitting whether the operation is read-only, non-destructive, or has any rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very brief (5 words) but lacks necessary context; it is under-specified rather than concisely informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should clarify the return format (e.g., object or array) but does not. It is insufficient for an agent to fully understand the tool's output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the single parameter 'ids' has a description). The tool description adds no extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Aerodrome metadata — lat/lon, elevation, country' clearly specifies the resource (aerodrome) and the data fields returned, distinguishing it from weather-focused sibling tools like metar, taf, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, nor any exclusions or specific contexts provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
taf (Grade: C)
Terminal Aerodrome Forecasts.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Comma-separated ICAO codes | |
| format | No | json (default) \| raw | |
| hours_before | No | 1-24 (default 12) | |
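A hedged sketch of a forecast request; the ICAO codes are illustrative.

```python
# Illustrative taf arguments.
taf_args = {
    "ids": "KSFO,EGLL",  # required: comma-separated ICAO codes
    "format": "json",    # optional: json (default) or raw
    "hours_before": 12,  # optional: 1-24, default 12
}
```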
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description provides no behavioral information about side effects, authorization needs, rate limits, or output characteristics. The tool could be a read operation, but this is not stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short (three words) and incomplete. It is under-specified, is not a full sentence, and adds no value beyond the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has three parameters, no output schema, and multiple siblings, the description fails to explain what a TAF is, how to use the parameters, or what the response contains. It is completely inadequate for informed tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters with descriptions (ids, format, hours_before). The description adds no extra meaning beyond the schema. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Terminal Aerodrome Forecasts' identifies the data type but lacks a verb indicating action (e.g., retrieve). It clearly names the resource, making the purpose understandable, but does not distinguish it from sibling weather tools like METAR or AIRMET.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings. The description does not mention any context, prerequisites, or alternatives, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claimAInspect
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
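A minimal sketch of a tools/call request for this tool follows, assuming the standard MCP JSON-RPC envelope; the claim text reuses the example given in the parameter description.

```python
import json

# Minimal sketch of an MCP "tools/call" request body for validate_claim.
# The claim text mirrors the example from the parameter description.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "validate_claim",
        "arguments": {
            # Natural-language factual claim (the only, required parameter)
            "claim": "Apple's FY2024 revenue was $400 billion",
        },
    },
}

print(json.dumps(request, indent=2))
```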
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes scope (v1, company-financial claims via SEC EDGAR + XBRL), the verdicts returned, and the fact that it replaces multiple sequential calls. No annotations exist, but the description covers the key behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense but efficient: multiple sentences packed with information. It is not overly long, though it could be slightly better structured; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema or annotations, description is fairly complete: covers what, when, scope, and results. Lacks error handling but adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the schema already includes a description with examples. The tool description adds context about natural-language claims and scope, providing slight additional value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it fact-checks/verifies claims against authoritative sources, specifically for company-financial claims. Distinguishes from siblings (aviation weather tools, entity tools) by being unique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use ('check whether something a user said is true') with examples. Does not explicitly mention when not to use, but context and sibling tools make it clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
windsaloftBInspect
Winds aloft forecast.
| Name | Required | Description | Default |
|---|---|---|---|
| level | No | low or high (default low) | |
| region | No | us, bos, mia, chi, dfw, slc, sfo, or alaska | |
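A minimal sketch of a tools/call request for this tool, again assuming the standard MCP JSON-RPC envelope; both arguments are optional and the values shown are illustrative.

```python
import json

# Minimal sketch of an MCP "tools/call" request body for windsaloft.
# Both parameters are optional; the values below are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "windsaloft",
        "arguments": {
            "level": "low",    # "low" (default) or "high"
            "region": "bos",   # us, bos, mia, chi, dfw, slc, sfo, or alaska
        },
    },
}

print(json.dumps(request, indent=2))
```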
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must convey behavioral traits. The term 'forecast' suggests a read-only operation, but no explicit statement of safety, side effects, or output format is given. This is minimal transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
At just three words, the description is extremely concise and front-loaded. However, the brevity sacrifices completeness rather than achieving efficiency; it is not verbose, but it would benefit from more informative phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, and the presence of numerous aviation weather siblings, the description should explain what the forecast includes (e.g., wind speed, direction, levels) and any limitations. It lacks such contextual detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with descriptions for both parameters ('level' and 'region'). The description 'Winds aloft forecast' adds no additional meaning beyond the schema, thus the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Winds aloft forecast' clearly indicates the resource (winds aloft) and that it provides forecast data, distinguishing it from siblings like metar or taf. However, it lacks an explicit verb like 'retrieve' or 'get', which would make the action more direct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as airmet, sigmet, or metar. The description does not mention any preconditions or exclusions, leaving the agent uncertain about selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
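As a quick sanity check before waiting for Glama's automatic verification, you might confirm that the file is publicly reachable and parseable; the sketch below assumes example.com stands in for your server's domain and the email is a placeholder.

```python
import json
import urllib.request

# Hypothetical check that /.well-known/glama.json is served from your domain.
# "example.com" and the email value are placeholders; substitute your own.
url = "https://example.com/.well-known/glama.json"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# The file should list at least one maintainer with an email address.
assert data["maintainers"] and data["maintainers"][0].get("email")
print("glama.json is published and parseable:", data["maintainers"][0]["email"])
```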
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!