Trefle
Server Details
Trefle MCP — global plant database (1M+ species)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-trefle
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 16 of 16 tools scored. Lowest: 2.9/5.
The tools blend two disparate domains (plant data and Pipeworx queries). Some Pipeworx tools, such as ask_pipeworx and compare_entities, have overlapping query capabilities, but their descriptions provide enough nuance to tell them apart. The plant tools do not overlap with anything else, yet they sit awkwardly alongside the Pipeworx set.
Naming conventions vary widely: plant tools use a verb_noun pattern (get_plant, search_plants), while Pipeworx tools mix verb_noun (ask_pipeworx, validate_claim) and noun phrases (entity_profile, recent_changes). No consistent style across the set.
Sixteen tools is reasonable for a server covering two distinct functions. The count could be trimmed if the server had a single focus, but as it stands each tool has a defined role and the total is not excessive.
Plant domain covers search and retrieval but lacks create/update/delete. Pipeworx domain offers broad query coverage but gaps exist (e.g., no tool for editing stored data). Overall, the surface is functional but not fully lifecycle-complete for either domain.
Available Tools
16 tools
ask_pipeworx (A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
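For orientation, a hypothetical MCP tools/call request for this tool might look like the sketch below. The JSON-RPC envelope follows the standard MCP calling convention; the question value is lifted from the examples in the description and is purely illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the current US unemployment rate?"
    }
  }
}
```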
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It explains that the tool routes across many sources, fills arguments, and returns results, but it lacks detail on limitations, latency, error handling, and authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and starts with a clear summary sentence. Examples are included but could be listed more succinctly. Overall, it conveys necessary information without excess.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the input schema and lack of output schema or annotations, the description provides adequate context by specifying use cases and example queries. It could mention possible failure modes or response format for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'question' is present, and its schema description already indicates it accepts natural language. The description adds value with concrete examples, showing the breadth of queries supported.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers natural-language questions by automatically picking the right data source. It distinguishes from sibling tools by emphasizing it eliminates the need to manually choose the correct Pipeworx pack or tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when a user asks...' and provides examples. It implies this tool is for broad queries across many sources, but does not explicitly contrast with sibling tools like get_plant or entity_profile.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
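As an illustrative sketch (not part of the published tool definition), the arguments for a company comparison would follow the schema above; for type "drug", values would instead hold drug names such as ["ozempic", "mounjaro"].

```json
{
  "type": "company",
  "values": ["AAPL", "MSFT", "GOOGL"]
}
```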
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description shoulders the full transparency burden. It details data sources and outputs per type: for company, it pulls revenue, net income, cash, and long-term debt from SEC EDGAR/XBRL; for drug, it pulls adverse-event report counts (FAERS), FDA approval counts, and active trial counts. It mentions the return of paired data and citation URIs. Missing is the behavior on invalid values, but it is sufficiently transparent overall.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured paragraph that front-loads the core purpose. Every sentence provides distinct value: purpose, usage patterns, data specifics, and efficiency claim. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description covers key aspects: entity types, data sources, return content, and number of entities. It adequately handles complexity from two distinct workflows and multiple data sources. No critical gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters are described in the schema (100% coverage), but the description adds valuable context: it explains the 'type' parameter enumerations and provides concrete examples for 'values' (tickers/CIKs for company, drug names for drug). It also clarifies that data pulled depends on type, which is not in the schema. This goes beyond basic schema information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Compare 2–5 companies (or drugs) side by side in one call.' It specifies the action (compare), resource (companies/drugs), and scope (2–5 entities). This distinguishes it from sibling tools like entity_profile which likely handles single entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage triggers: 'when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings...' It also notes efficiency gains ('Replaces 8–15 sequential agent calls'), implying superiority over alternatives. However, it does not explicitly state when NOT to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
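A hypothetical arguments payload, using one of the example queries from the schema; the limit value here is illustrative (the schema default is 20, max 50).

```json
{
  "query": "analyze housing market trends",
  "limit": 10
}
```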
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It indicates that the tool returns tool names and descriptions, implying a read-only search. However, it does not explicitly state that it has no side effects, requires no authentication, or describe behavior on empty results. While not misleading, it lacks full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two well-crafted sentences. The first sentence states the action and scope, listing key domains. The second provides usage timing advice. Every part contributes, and there is no redundant or irrelevant content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (two parameters, no nested objects, no output schema), the description covers the main aspects: what it does, when to use it, and the return type (names + descriptions). It slightly lacks detail on the output format (e.g., whether scores are provided) but is sufficient for an agent to understand its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes both parameters with 100% coverage. The description adds value by providing concrete examples for the query parameter (e.g., 'analyze housing market trends') and clarifying the limit's effect ('top-N most relevant tools'). This enhances understanding beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find tools by describing the data or task.' It lists extensive domains and concludes with 'Returns the top-N most relevant tools with names + descriptions.' This distinguishes it from sibling tools that are specific to particular domains, making its meta-tool role unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when you need to browse, search, look up, or discover what tools exist for...' and 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This provides clear usage context. It does not explicitly state when not to use it, but the positive guidance is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
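An illustrative arguments payload using the ticker form; per the description, a zero-padded CIK such as "0000320193" works in place of the ticker, and names must go through resolve_entity first.

```json
{
  "type": "company",
  "value": "AAPL"
}
```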
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns recent SEC filings, fundamentals, patents, news, and LEI with citation URIs. It implies a read-only operation. While it doesn't mention rate limits or authentication, the detail about returned data is sufficient for a 'get' tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences covering purpose, usage, return values, and input format. It is dense but not wordy; each sentence adds value. A small improvement would be to combine the usage examples with the return description, but it is overall well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return values (filings, fundamentals, patents, news, LEI) and mentions citation URIs. It does not detail the structure of each returned item, but the context signals (2 params, enum) are well-covered. The tool's role as a comprehensive profile gatherer is clear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds significant value beyond the schema. It explains that 'type' only supports 'company' currently, and 'value' accepts ticker or CIK, with an explicit note that names are not supported and to use 'resolve_entity' instead. This clarifies the parameters' intended use.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get everything about a company in one call' and lists specific use cases like 'tell me about X' and 'research Microsoft'. It distinguishes itself from siblings by noting it replaces calling 10+ pack tools and explicitly mentions the alternative 'resolve_entity' for name lookups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool ('when a user asks...') and when not to use it ('Names not supported — use resolve_entity first if you only have a name'). It provides examples of appropriate queries and describes the output content, giving clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It states 'delete', implying destructive action, but does not elaborate on side effects like irreversibility. For a simple deletion, this is mostly adequate, missing only minor details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long with no extraneous information. It is front-loaded with the core action and includes usage guidance, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description is complete enough. It covers the action, usage context, and relationship to siblings. However, it could mention whether deletion is reversible for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes the parameter 'key' as 'Memory key to delete'. The description does not add additional meaning beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete) and the resource (previously stored memory by key). It distinguishes from siblings 'remember' (store) and 'recall' (retrieve), which are explicitly mentioned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when to use: when context is stale, task is done, or to clear sensitive data. Also suggests pairing with 'remember' and 'recall', offering clear guidance on usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_plant (A)
Full plant record by Trefle ID. Returns scientific/common names, family, status, image, year, main species.
| Name | Required | Description | Default |
|---|---|---|---|
| plant_id | Yes | Trefle plant ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should clarify if the operation is read-only, idempotent, or requires authorization. It only lists return fields but omits details on side effects, rate limits, or error behavior, leaving some ambiguity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence concisely states the purpose and key return fields, front-loading the most important information with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with no output schema, the description adequately covers the return value by listing fields. It could be improved by briefly noting output format, but it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (plant_id described). The description adds minimal meaning ('by Trefle ID') beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches a full plant record by Trefle ID and lists the specific fields returned (scientific/common names, family, etc.), distinguishing it from siblings like 'search_plants' or 'get_species'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a Trefle ID is known and full details are needed, but lacks explicit guidance on when to use this over alternatives, or any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_species (A)
Full species record: growth habit, light/atmospheric humidity/soil requirements, edible / medicinal flags, distribution, images, sources.
| Name | Required | Description | Default |
|---|---|---|---|
| species_id | Yes | Trefle species ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must carry the burden. It lists the fields returned, which provides some transparency, but does not disclose behavioral traits like authorization requirements, rate limits, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is concise and informative, listing all relevant return fields without extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (one parameter, no nested objects, no output schema), the description adequately specifies the return content. It covers key aspects but might benefit from mentioning the format or any limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (one parameter with a simple description). The tool description adds no additional meaning to the parameter beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a full species record with specific fields listed. It distinguishes itself from sibling tools like get_plant by specifying it is for species records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as search_species. The description only lists the return fields, not usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_distributions (B)
Native or introduced distribution zones (TDWG WGSRPD codes — e.g., "NWY" = Norway, "CAL" = California). Returns species lists by zone.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-based page | |
| zone | Yes | TDWG zone code (e.g., "CAL", "FRA", "JAP") | |
| page_size | No | 1-100 | |
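An illustrative arguments payload using one of the example zone codes; page and page_size are optional and are shown here only to make the paging controls concrete.

```json
{
  "zone": "CAL",
  "page": 1,
  "page_size": 20
}
```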
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fails to disclose important behavioral traits such as pagination behavior, error handling for invalid zones, or whether the operation is read-only. The tool's read-only nature is implied but not stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, but the initial phrase 'Native or introduced distribution zones' could be misinterpreted as listing zones rather than species. Slight lack of precision.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should explain pagination, default page size, and what happens with invalid zones. It is incomplete for a tool with 3 parameters that requires understanding of TDWG codes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes all parameters. The description adds context about what a zone is (TDWG codes, examples) but does not significantly enhance meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns species lists by TDWG distribution zone, with specific examples (NWY, CAL). It distinguishes itself from sibling tools like search_species by focusing on zone-based listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining species lists per zone but does not explicitly state when to use this tool vs alternatives like search_species or get_plant. No guidance on when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
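An illustrative payload for a feature-type report; the message text is hypothetical and the optional context field is omitted.

```json
{
  "type": "feature",
  "message": "A tool for editing previously stored data would round out the remember/recall/forget set."
}
```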
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description fully discloses the rate limit (5 per identifier per day), that the tool is free and does not count against the tool-call quota, and that feedback is read daily and influences the roadmap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A concise set of sentences, each providing distinct information: purpose first, then usage, then constraints, then format advice. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains what happens after submission (the team reads digests; signal affects the roadmap). It covers all relevant aspects for a feedback tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers all parameters with descriptions; the tool description adds context by explaining each enum value and advising on message content (be specific, 1-2 sentences). It adds meaningful guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool sends feedback to the Pipeworx team and lists the feedback types (bug, feature, data_gap, praise, other). It is distinct from the sibling tools, which handle data queries or actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly describes when to use the tool: for wrong or stale data (bug), a missing tool (feature/data_gap), or a positive experience (praise). It also advises against pasting the end-user's prompt, guiding proper use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
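An illustrative arguments payload for retrieving a single key (the key name is borrowed from the remember tool's examples); per the schema, omitting key and passing an empty arguments object lists all saved keys instead.

```json
{
  "key": "target_ticker"
}
```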
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must convey all behavioral traits. It discloses the retrieve and list behaviors, scoping, and pairing, but it omits details on behavior when the key is missing (e.g., null vs error) and the output format for listing keys.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences front-load the core action and each sentence adds unique value: function, use case, scoping/pairing. No superfluous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, so the description should explain return values. It mentions retrieving a value or listing keys, but does not specify the format of the list or behavior in edge cases like missing key. Adequate but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (one parameter with description). The description reinforces the schema's info (omit key to list all) but adds no new details on format, constraints, or examples. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieve a saved value by key or list all keys when omitted. It specifies the verb (retrieve/list) and resource (previously saved values), and distinguishes from sibling tools like 'remember' and 'forget' by mentioning pairing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use: to look up previously stored context without re-deriving. It mentions scoping by identifier and pairing with remember/forget, but lacks explicit exclusions or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
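An illustrative arguments payload using the relative-shorthand form of since that the schema recommends for typical monitoring.

```json
{
  "type": "company",
  "since": "30d",
  "value": "AAPL"
}
```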
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the tool fans out to three sources in parallel and returns structured changes with citation URIs. This provides adequate behavioral context beyond the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4-5 sentences) and front-loaded with the purpose. Every sentence adds information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return type (structured changes, count, citations). It covers input semantics thoroughly and is complete enough for a query tool with three parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining the 'since' parameter format (ISO date or relative shorthand) and clarifying that 'value' can be a ticker or CIK. This reduces ambiguity for the agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving recent changes for a company. It provides example queries and distinguishes itself from sibling tools like entity_profile and compare_entities by focusing on temporal updates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists example use cases and queries, guiding the agent on when to invoke it. It lacks explicit when-not-to-use guidance but is clear enough for typical scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
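An illustrative arguments payload using the key naming style suggested in the schema; the stored value here is hypothetical.

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```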
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description fully covers behavioral traits: key-value pair storage, scoped by identifier, and retention policy (persistent for authenticated, 24h for anonymous). No mention of overwriting behavior, but it's implicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise and front-loaded: first sentence states purpose, followed by usage and behavioral details. No superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value store tool with no output schema and full schema coverage, the description covers when, what, scope, and sibling pairing, making it complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. Description adds naming conventions (e.g., 'subject_property') and value type, but does not significantly extend beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool saves data for later reuse, specifies the verb 'save' and resource 'data', and distinguishes from siblings by mentioning 'recall' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use guidance with examples (resolved ticker, target address, etc.) and explains scope (persistent vs 24-hour). Lacks explicit when-not-to-use but is otherwise thorough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
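An illustrative arguments payload for the drug case, using one of the schema's example names; for companies, value would instead be a ticker, CIK, or name such as "Apple".

```json
{
  "type": "drug",
  "value": "ozempic"
}
```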
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries the burden. It explains the tool returns IDs and citation URIs, gives examples, and implies it is a read-only query. Lacks details on error handling or multiple matches, but overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise, front-loaded with purpose, and every sentence adds value: purpose, when to use, examples, output, workflow guidance. No filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter tool with no output schema, the description is thorough: it covers inputs, outputs, ID systems, and workflow positioning. It lacks error scenarios and data-freshness notes but still meets a high bar.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% so baseline is 3. The description adds real-world examples ('Apple→AAPL') and contextualizes how parameters map to ID systems, providing value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool looks up canonical identifiers for companies or drugs, specifies the ID systems (CIK, ticker, RxCUI, LEI), and distinguishes from sibling tools by noting it replaces multiple lookup calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: 'when a user mentions a name and you need the CIK...' and advises to use this BEFORE other tools that need official identifiers. Provides examples and implies when not needed (if identifier already known).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_plants (A)
Search plants by common or scientific name. Returns Trefle plant ID, scientific name, common name, family, image. Filter by edible flags.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-based page | |
| query | Yes | Free-text query | |
| edible | No | Only edible plants | |
| page_size | No | 1-100 (default 20) | |
| vegetable | No | Only vegetables | |
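An illustrative arguments payload combining the free-text query with the edible filter and paging controls; the query string is hypothetical.

```json
{
  "query": "strawberry",
  "edible": true,
  "page": 1,
  "page_size": 20
}
```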
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With empty annotations, the description carries the full burden. It states the returned fields and the edible filter, but it does not mention pagination behavior (page, page_size), default values, or that results are a list. It lacks detail on ordering and rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with action and scope, followed by returned fields and filter capabilities. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given five parameters, no output schema, and no annotations, the description covers the query and edible filtering but omits the pagination parameters (page, page_size) and the vegetable filter. It is not fully complete for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds context matching 'query' to name search and 'edible' to filter, but does not elaborate on 'page' or 'page_size' beyond schema. The 'vegetable' parameter is not mentioned, though 'edible flags' could hint at both booleans.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Search plants by common or scientific name', specifying verb (search) and resource (plants). It distinguishes from sibling tools like 'get_plant' (specific plant) and 'search_species' (species search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for name-based plant search with an edible filter, but does not explicitly compare to alternatives or state when not to use. No guidance on when to prefer this over 'search_species' or 'get_plant'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_species (C)
Search species (the rank below plants — some plants have multiple species). Returns Trefle species ID + scientific name.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-based page | |
| query | Yes | Free-text query | |
| page_size | No | 1-100 | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It mentions the return format (ID + scientific name) but does not disclose pagination behavior, rate limits, mutability, or any side effects. The description hints at multiple results but is ambiguous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no unnecessary words. The first sentence immediately conveys the purpose and scope. Every phrase earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three parameters, no output schema, and no annotations, the description is insufficient. It does not explain pagination, how results are sorted, or expected behavior for empty queries. A search tool should clarify that it returns a list, supports pagination via page and page_size, and possibly mention default values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters. The description adds no extra meaning to the parameters beyond what is in the schema, but it does clarify the return structure, which is helpful context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches species, defines species as ranking below plants, and specifies it returns Trefle species ID and scientific name. It implicitly distinguishes from sibling 'search_plants' by focusing on species rank, but the phrase 'some plants have multiple species' is slightly ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'search_plants' or 'get_species'. The description does not mention prerequisites, constraints, or preferred use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
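An illustrative arguments payload, reusing the example claim from the schema verbatim.

```json
{
  "claim": "Apple's FY2024 revenue was $400 billion"
}
```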
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the supported domain (SEC EDGAR + XBRL), the returned verdicts, and that it is v1 with limited scope. There are no contradictions. It could add detail on rate limits or authentication, but the current description is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise (4 sentences), front-loaded with purpose, and efficiently covers usage, domain, return value, and efficiency gains. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description explains return values comprehensively. It covers the supported claim types and sources, making it complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes the 'claim' parameter. Description adds concrete examples of valid claims, which helps the agent understand expected input beyond the schema's description. Since schema coverage is 100%, the description adds value with examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-check, verify, validate, or confirm/refute natural-language claims. It specifies the domain (company-financial claims for US public companies) and distinguishes it from generic answering tools like ask_pipeworx.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool (when checking user claims) and provides illustrative examples. It mentions it replaces sequential calls, but does not explicitly state when not to use it or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.