Climatiq
Server Details
Climatiq MCP — carbon footprint calculator with emission factors
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-climatiq
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 14 of 14 tools scored. Lowest: 3.9/5.
Most tools have clear distinct purposes, but ask_pipeworx is a broad catch-all that could overlap with specific tools like entity_profile or compare_entities, potentially causing confusion if descriptions are not carefully followed.
Names are mostly descriptive with underscores, but there is inconsistency: some are verb_noun (e.g., compare_entities), some are noun_noun (e.g., entity_profile), and three are single verbs (forget, recall, remember). The mix of patterns reduces predictability.
14 tools is appropriate for the server's scope, which includes emissions estimation, company research, memory, and data discovery. Each tool serves a distinct need without overwhelming the agent.
The tool set covers the key operations: search and estimate emissions, resolve entities, get profiles, compare, validate claims, discover tools, and manage memory. The ask_pipeworx tool fills gaps for miscellaneous queries, making the surface comprehensive.
Available Tools
14 tools
ask_pipeworx
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
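Since ask_pipeworx takes a single free-text parameter, a call reduces to one argument. The sketch below shows what an MCP tools/call request body could look like; the question text is just one of the examples quoted in the description.

```python
import json

# Hypothetical MCP "tools/call" request for ask_pipeworx.
# The question value is an illustrative example, not a documented query.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the current US unemployment rate?"},
    },
}

print(json.dumps(request, indent=2))
```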
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It explains routing, automatic tool selection, argument filling, and result returning. It does not disclose limitations (e.g., rate limits, authenticity of responses), but covers core behavior adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and usage, followed by examples. It is slightly verbose (multiple examples, sources), but every sentence adds value, though it could be slightly more compact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description is comprehensive. It explains what the tool does, how it routes, scope of data sources, and provides concrete examples. It covers all necessary context for the agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'question' with schema description 'Your question or request in natural language'. The tool description adds valuable context with examples and data source scope, going beyond the schema. With 100% schema coverage, baseline is 3; the added examples and scope justify a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers natural-language questions by automatically selecting the right data source. It lists specific examples and 300+ sources, distinguishing it from siblings like `discover_tools` or `entity_profile` which require explicit tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use when the user asks 'What is X?' or similar queries and 'you don't want to figure out which Pipeworx pack/tool to call'. This gives clear when-to-use and implies alternatives (other tools) when specific selection is desired.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
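Illustrative argument payloads for the two entity types, built only from the examples already given in the description (AAPL/MSFT tickers, ozempic/mounjaro drug names):

```python
# Sketches of compare_entities arguments; values are the examples
# quoted in the tool description, not live data.
company_args = {"type": "company", "values": ["AAPL", "MSFT", "GOOGL"]}  # 2-5 tickers/CIKs
drug_args = {"type": "drug", "values": ["ozempic", "mounjaro"]}          # 2-5 drug names
```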
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description adequately discloses the tool's behavior: it fetches financial data from SEC EDGAR/XBRL for companies and adverse event/trial data from FAERS for drugs, and returns paired data with citation URIs. It is clear this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise, front-loads the purpose, and uses bullet-like structure. Some redundancy exists in the use-case examples, but overall it is efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description gives a clear idea of what is returned (paired data + citation URIs). It covers both entity types and data sources, though it omits details on error handling or edge cases. Complete enough for a typical tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds significant meaning beyond the schema by explaining the 'type' parameter values and the format for 'values', including examples for companies (tickers) and drugs (names).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 companies or drugs side by side, with specific use case examples and data sources for each type. It distinguishes itself from siblings by noting it replaces sequential entity queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists trigger phrases like 'compare X and Y' and states when to use, but does not mention when not to use or alternative tools. However, the description implies that for single entity queries, other tools like entity_profile should be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
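A minimal discovery call could look like the following, reusing one of the example queries from the parameter table and staying under the documented maximum of 50 results:

```python
# Hypothetical discover_tools arguments. "limit" defaults to 20 (max 50)
# per the schema; the query string is one of the table's own examples.
discover_args = {
    "query": "analyze housing market trends",
    "limit": 10,
}
```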
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description states it returns 'top-N most relevant tools with names + descriptions.' It mentions the default limit and max, giving good transparency. It does not claim any destructive or modifying behavior. Could be improved by explicitly stating read-only nature, but overall adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately long but every sentence adds value: purpose, usage guidance, examples, and return format. It is front-loaded with the core purpose. Minor redundancy could be trimmed, but overall it is effective and not verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a discovery tool with no output schema, the description sufficiently covers input parameters and return format (tool names + descriptions). It lacks details on ranking or pagination, but for the intended use case of browsing tools, it is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds value by explaining the query parameter with examples (e.g., 'analyze housing market trends') and clarifying the 'limit' parameter's default and max. It also states the return format, which complements the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Find tools by describing the data or task.' The verb 'find' and resource 'tools' are explicit. It distinguishes from siblings by positioning itself as a discovery tool to be called first when exploring available options, contrasting with the other specific tools like 'entity_profile' or 'search_factors'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when you need to browse, search, look up, or discover what tools exist' and 'Call this FIRST when you have many tools available and want to see the option set.' Provides clear context but does not explicitly list exclusions or alternatives beyond general guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
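Because names are not accepted, a profile lookup passes a ticker or zero-padded CIK directly; both forms below are taken from the description's own examples:

```python
# entity_profile argument sketches. Only type="company" is supported today.
by_ticker = {"type": "company", "value": "AAPL"}      # ticker form
by_cik = {"type": "company", "value": "0000320193"}   # zero-padded CIK form
# If you only have a name (e.g., "Apple"), call resolve_entity first.
```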
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It enumerates returned data (SEC filings, fundamentals, patents, news, LEI) and mentions citation URIs. However, it does not mention any limitations like data freshness, rate limits, or size constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose. Its three sentences each add useful context, but it could be slightly more concise without losing key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so description must explain return values. It covers the core data types (SEC, USPTO, news, LEI) but omits details like whether historical data is included or response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining that only 'company' type is supported and that value must be a ticker or zero-padded CIK (not names), directing to resolve_entity for name resolution.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get everything about a company in one call' and lists specific data sources (SEC, USPTO, news, LEI). It contrasts with calling multiple pack tools, distinguishing it from siblings like resolve_entity and compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists example user queries ('tell me about X', 'brief me on Tesla') and instructs to use resolve_entity if only a name is provided, providing clear when-to-use and when-not-to guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_emissions
Calculate kgCO₂e for an activity. Pass an emission_factor with at minimum activity_id (from search_factors) or id, plus the required parameters (e.g., { energy: 200, energy_unit: "kWh" }). Returns CO₂e + breakdown.
| Name | Required | Description | Default |
|---|---|---|---|
| parameters | Yes | Activity quantities matching the factor's required unit type (e.g., {energy:200, energy_unit:"kWh"}). | |
| emission_factor | Yes | Selector object — at minimum {activity_id: "..."} or {id: "..."}. May add region, year, source for disambiguation. |
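A typical two-step flow is to pick a factor via search_factors and then pass its activity_id here along with the matching unit parameters. In the sketch below the activity_id is a placeholder; real IDs come from search_factors results.

```python
# Hypothetical estimate_emissions arguments for 200 kWh of grid electricity.
# The activity_id string is a placeholder; use whatever ID search_factors
# actually returns for your query.
estimate_args = {
    "emission_factor": {
        "activity_id": "electricity-supply_grid-source_residual_mix",  # placeholder
        "region": "US",  # optional disambiguation, per the selector description
    },
    "parameters": {
        "energy": 200,
        "energy_unit": "kWh",
    },
}
```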
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It describes the calculation operation and required inputs, implying no side effects. Does not mention rate limits or permissions, but for a read-only calculation this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the purpose and quickly cover input requirements and output. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (calculation, no output schema needed), the description fully covers what the agent needs: inputs, unit hints, and output summary. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds value by explaining the relationship between emission_factor and parameters, giving an example with energy/kWh. This helps the agent understand how to construct the parameter object.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Calculate kgCO₂e for an activity.' The verb 'Calculate' and resource 'kgCO₂e' are specific. Differentiates from sibling tools like search_factors.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit instructions: 'Pass an emission_factor with at minimum activity_id (from search_factors) or id, plus the required parameters.' Implicitly tells when to use this tool after retrieving a factor. Lacks explicit when-not-to-use but is adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description must bear the burden. It states 'Delete', implying destructiveness, but does not explicitly confirm irreversibility or describe behavior on missing keys.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no redundancy; first sentence delivers the core purpose, making it efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool with no output schema, the description covers purpose and usage adequately, though it could mention error handling for nonexistent keys.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'key' described as 'Memory key to delete'. The description adds no further meaning beyond the schema, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('previously stored memory by key'), distinguishing it from siblings like 'remember' and 'recall'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use scenarios ('context is stale, task done, clear sensitive data') and recommends pairing with 'remember' and 'recall', offering clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_unit_types
List supported unit-type families (Energy / Volume / Distance / Mass / etc.) and their valid units.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It accurately describes the read-only nature of listing data, but does not mention potential ordering or grouping of output. For a simple list without side effects, this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that immediately conveys the tool's function. It includes examples in parentheses to clarify the families without unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no annotations, no output schema), the description is fully complete. It tells the agent exactly what the tool returns: families and their units, sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters and 100% coverage. With no parameters to document, the baseline is 4. The description does not add parameter semantics because none exist, but it appropriately describes the output content.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: listing unit-type families (e.g., Energy, Volume, Distance) and their valid units. It uses a specific verb 'list' and identifies the resource, distinguishing it from sibling tools which focus on queries, comparisons, or mutations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when one needs to see valid units or families, but it does not explicitly state when to use this tool versus alternatives. Since no sibling tool has a similar function, the omission is minor, but explicit guidance would improve clarity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
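A feedback call is a single structured message; the example below reports a made-up data gap, keeping the message in terms of Pipeworx tools as the description asks. The exact shape of the optional context object is not documented, so the form used here is a guess.

```python
# Illustrative pipeworx_feedback arguments; the reported gap is invented.
feedback_args = {
    "type": "data_gap",
    "message": "search_factors returns no factors for ocean freight in some regions.",
    "context": {"tool": "search_factors"},  # assumed shape for the structured context
}
```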
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses rate limits (5/day), free usage, no quota impact, and team digest reading. Since no annotations exist, this covers key behaviors well, though it could mention whether feedback is irreversible.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise yet comprehensive: starts with main purpose, then usage categories, constraints, and tips. No fluff, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all necessary aspects for agent usage: purpose, when to use, parameter details, rate limits, and what team does with feedback. No output schema needed for this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds value beyond schema by explaining enum types, context fields, and message tips. Schema coverage is 100% so baseline 3, but description enriches understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool is for providing feedback (bug, feature, etc.) and distinguishes from sibling tools like 'ask_pipeworx' and 'discover_tools' by focusing on reporting issues or praise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly specifies when to use each feedback type (bug, feature, data_gap, praise) and provides constraints like not pasting user prompts and rate limits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully covers behavioral traits: retrieval is scoped to an identifier, omitting key lists all keys, and it's a read operation. It does not mention rate limits or error handling, but the basic behavior is well-disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured paragraph. It starts with the main action, then provides examples, scoping, and pairing. Every sentence serves a purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one optional parameter and no output schema, the description is complete: it explains the action, usage context, scoping, and relationships to siblings. It could mention return format or error cases, but the core usage is fully covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with a description for 'key'. The description adds value by giving concrete examples of keys and clarifying that omission lists all keys, which the schema does not explicitly state.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves values saved via remember or lists keys if omitted. It specifies the verb 'retrieve' and resource 'saved memories,' and distinguishes from siblings 'remember' and 'forget' by describing complementary actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when to use: to look up context without re-deriving, with examples (ticker, address, notes). Mentions scoping and pairing with remember/forget. Lacks explicit when-not-to-use but context is clear given sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
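The since parameter accepts either an ISO date or relative shorthand, so the same monitoring request can be phrased two ways; both forms below come straight from the description and parameter table:

```python
# recent_changes argument sketches; both "since" formats are documented above.
last_30_days = {"type": "company", "value": "MSFT", "since": "30d"}
since_date = {"type": "company", "value": "0000320193", "since": "2026-04-01"}
```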
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavioral traits: it fans out to three sources in parallel, accepts specific date formats, and returns structured data with citation URIs. This is comprehensive and leaves no ambiguity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, using four sentences to convey purpose, usage, inputs, and outputs. Every sentence adds information without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three required parameters and no output schema, the description covers the tool's behavior and output well. It explains the fan-out and return structure. Minor omission: no mention of potential errors or rate limits, but this is acceptable for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers all three parameters with descriptions. The description adds value by clarifying acceptable formats for 'since' (ISO date vs relative) and 'value' (ticker or CIK), and noting that 'type' only supports 'company'. This enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: reporting what's new with a company. It provides specific example queries and explains the fan-out to multiple sources, distinguishing it from siblings which likely focus on different entity types or historical data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists when to use the tool, with multiple example user queries. It does not, however, mention when not to use it or suggest alternative siblings, which would improve guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
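Taken together, remember, recall, and forget form a simple key-value memory cycle. The keys and values below are illustrative, echoing the examples given in the schemas:

```python
# Hypothetical argument payloads for the memory tool trio.
save_args = {"key": "target_ticker", "value": "AAPL"}  # remember: store a finding
lookup_args = {"key": "target_ticker"}                 # recall: retrieve it later
list_args = {}                                         # recall with no key lists all keys
delete_args = {"key": "target_ticker"}                 # forget: remove it when stale
```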
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description fully explains persistence behavior: authenticated users get persistent memory, anonymous sessions 24 hours. Could mention update behavior for existing keys, but otherwise strong.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise, well-structured with purpose, usage, and persistence details. No wasted sentences; front-loaded with core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description doesn't specify return value. However, for a storage operation, success/failure is often implicit. Overall context sufficient for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers both parameters with descriptions (key examples, value as any text). Description reinforces key-value pair concept but adds minimal new semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool saves data for later reuse, provides concrete examples (resolved ticker, address, preference), and distinguishes it from siblings 'recall' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clearly advises when to use ('when you discover something worth carrying forward') and mentions complementary tools ('Pair with recall to retrieve later, forget to delete').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
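Resolution is typically the first hop before entity_profile or compare_entities; the inputs below reuse the examples from the description and parameter table:

```python
# resolve_entity argument sketches (example names taken from the description).
company_lookup = {"type": "company", "value": "Apple"}  # -> ticker AAPL / CIK 0000320193
drug_lookup = {"type": "drug", "value": "ozempic"}      # -> RxCUI 1991306, per the description
# Feed the returned identifier into entity_profile, e.g. {"type": "company", "value": "AAPL"}.
```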
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so the description fully carries the burden. It discloses that the returns include IDs and citation URIs, with examples. It is missing details on error or no-match behavior, but a safe lookup is implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with purpose, no filler. Each sentence adds critical context (purpose, usage, examples, output).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage, examples, and return format. Lacks handling of ambiguous or missing lookups, but sufficient for a medium-complexity tool with full schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds value with concrete examples (e.g., 'Ozempic' → RxCUI 1991306) and clarifies input formats for both parameters beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves entities to official identifiers (CIK, ticker, RxCUI, LEI) with concrete examples. It distinguishes from siblings like 'entity_profile' by focusing on ID lookup needed by other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this BEFORE calling other tools that need official identifiers' and describes scenarios (user mentions a name). Lacks explicit when-not-to-use, but provides strong usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_factors
Search Climatiq's emission-factor database. Use to discover factor IDs and required parameters before calling estimate_emissions. Filter by query, category, source (e.g., "EPA", "DEFRA"), region (ISO code), or year.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-based page (default 1) | |
| year | No | Publication year | |
| query | No | Free-text search (e.g., "electricity grid", "concrete production") | |
| region | No | ISO-3166 region code (e.g., "US", "GB", "DE") | |
| source | No | Source dataset (e.g., "EPA", "DEFRA", "BEIS", "ecoinvent") | |
| category | No | Climatiq category (e.g., "electricity", "fuel", "freight", "computing") | |
| unit_type | No | Activity unit type (Energy, Volume, Distance, Mass, etc.) | |
| results_per_page | No | 1-100 (default 25) |
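A filtered search might combine a free-text query with source, region, and unit type, using only values the parameter table already lists as examples:

```python
# Hypothetical search_factors arguments; all filter values come from the
# examples in the parameter table above.
search_args = {
    "query": "electricity grid",
    "source": "EPA",
    "region": "US",
    "unit_type": "Energy",
    "results_per_page": 25,  # default; max 100
}
# Pick an activity_id from the results and pass it to estimate_emissions.
```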
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool filters by query, category, source, region, and year, but does not mention pagination, rate limits, or whether the operation is read-only. It is adequate but not deeply transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are concise and front-loaded. Every sentence adds value, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 8 parameters and no output schema or annotations, the description covers the essential purpose and filters. It lacks details on pagination and response format, but is adequate for a search tool with a rich schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds workflow context by stating these are filters used to discover factor IDs, which adds value beyond the schema descriptions. However, it does not explain all 8 parameters in detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'search' and the resource 'emission-factor database'. It also explains the purpose: to discover factor IDs and required parameters before calling estimate_emissions, distinguishing it from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use to discover factor IDs and required parameters before calling estimate_emissions', providing clear context for when to use this tool. However, it does not explicitly exclude alternatives or specify when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
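The entire claim goes into a single natural-language string; the example below is the one quoted in the parameter description:

```python
# validate_claim argument sketch (claim text copied from the schema example).
claim_args = {"claim": "Apple's FY2024 revenue was $400 billion"}
# Possible verdicts per the description: confirmed / approximately_correct /
# refuted / inconclusive / unsupported.
```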
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full responsibility for behavioral disclosure. It explains the returned verdict and components, the source (SEC EDGAR + XBRL), and notes it's v1 limited to company-financial claims. This provides good transparency about capabilities and limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and well-organized: first sentence defines purpose, followed by usage examples, scope, return details, and efficiency gain. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 1 parameter and no output schema, the description covers purpose, usage, behavior, and output adequately. It notes limitations (v1 scope). However, it omits potential errors, rate limits, or prerequisites, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, so baseline is 3. The description adds value by providing natural-language examples and clarifying the claim's scope (financial data for public US companies), which goes beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool validates factual claims against authoritative sources, with a specific scope of company-financial data. It uses strong verbs like 'verify' and 'validate', and distinguishes its purpose from simple data retrieval, though it doesn't explicitly differentiate from siblings like 'ask_pipeworx'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear examples of when to use the tool (e.g., checking truth of statements) and its scope (company-financial claims). However, it lacks explicit when-not-to-use guidance or alternatives, leaving the agent to infer from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!