Pkg Go Dev
Server Details
Go modules MCP — wraps proxy.golang.org
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-pkg-go-dev
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 15 of 15 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but 'ask_pipeworx', as a catch-all natural-language query tool, may overlap with 'discover_tools' and other data-retrieval tools. However, the descriptions help differentiate them, and the Go module tools are clearly separate.
Naming conventions are mixed: some tools use verb_noun (get_go_mod, list_versions, compare_entities), while others lead with the noun (entity_profile, pipeworx_feedback). This inconsistency could confuse an agent, though all names consistently use underscores.
With 15 tools covering two domains (Go modules and Pipeworx data services), the count is reasonable. It is not excessive, and each tool serves a clear role, though bundling two unrelated domains in one server is a debatable design choice.
The Go module tools cover essential version inspection operations, while Pipeworx tools provide comprehensive data access, comparison, profile, and memory functions. Minor gaps exist (e.g., missing direct dependency search for Go), but core workflows are well-supported.
Available Tools
15 tools

ask_pipeworx (Grade: A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
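To make the calling convention concrete, here is a minimal sketch of an MCP `tools/call` request for this tool, assuming standard JSON-RPC framing over the server's Streamable HTTP transport; the `id` is arbitrary and the question is taken from the examples above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```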
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions routing across many sources and that Pipeworx 'picks the right tool, fills arguments, returns the result.' However, it does not disclose potential failure modes, rate limits, or authentication requirements, leaving some behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that gets to the point quickly. It includes examples that aid understanding without excessive verbosity. Slightly longer than necessary but still efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema or annotations, the description is reasonably complete. It explains the tool's function, routing behavior, and gives concrete examples. It does not describe the output format, but that is often inferred from the natural language response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear parameter description. The description adds examples and context (e.g., 'What is the US trade deficit with China?') but does not significantly extend beyond the schema's 'Your question or request in natural language'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Answer a natural-language question by automatically picking the right data source.' It provides specific examples and distinguishes from sibling tools that are more specialized (e.g., compare_entities, entity_profile).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs when to use: 'Use when a user asks... and you don't want to figure out which Pipeworx pack/tool to call.' This provides clear context and implies alternatives for more specific queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade: A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
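A sketch of a company comparison call, using the ticker examples from the schema; a drug comparison would swap type to "drug" and pass drug names in values instead:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "compare_entities",
    "arguments": {
      "type": "company",
      "values": ["AAPL", "MSFT"]
    }
  }
}
```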
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description discloses data sources (SEC EDGAR/XBRL for companies, FAERS/FDA for drugs) and return format (paired data + citation URIs), and mentions it consolidates many calls. Sufficiently transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is fairly long but well-structured with clear sections for each type. Efficiently communicates key information, though slightly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains return format (paired data + URIs). Covers purpose, data sources, and types completely for a comparison tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% but description adds significant meaning: explains what each 'type' pulls (revenue, net income, etc. for company; adverse events, approvals, trials for drug) and details 'values' format (tickers/CIKs vs drug names).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it compares 2-5 companies or drugs side by side. It specifies the verb 'compare' and the resources, and distinguishes from siblings by noting it replaces 8-15 sequential agent calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists trigger phrases like 'compare X and Y' and 'X vs Y', and explains when to use each type (company vs drug). It lacks explicit when-not guidance but effectively covers usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
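A sketch of a discovery call; the query string is the schema's own example, and the limit of 10 is illustrative (the default is 20, maximum 50):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "analyze housing market trends",
      "limit": 10
    }
  }
}
```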
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool returns 'the top-N most relevant tools with names + descriptions', which is sufficient for a read-only discovery tool. It lacks details about default behavior or potential side effects, but for this purpose it is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the purpose and follows with a list of covered domains. While it is somewhat long, it is well-structured and informative without being verbose. Every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and the richness of the schema, the description is complete: it explains the purpose, usage context, and return value. No output schema exists, but the description covers what the tool returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds minimal value beyond the schema; it mentions 'top-N' but does not elaborate on parameter semantics beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool finds tools by describing the data or task, using specific verbs like 'browse', 'search', 'look up', and 'discover'. It distinguishes itself from sibling tools by focusing on tool discovery, which is unique among the listed siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when you need to browse, search, look up, or discover what tools exist for...' and advises 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This provides clear context and exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (Grade: A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
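A sketch of a profile request using the ticker from the description; note that a bare company name would first need a resolve_entity call:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "entity_profile",
    "arguments": {
      "type": "company",
      "value": "AAPL"
    }
  }
}
```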
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. It discloses return data types (filings, fundamentals, patents, news, LEI) and input constraints. Does not mention side effects or rate limits, but remains sufficiently transparent for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is informative but slightly long. Front-loads the main purpose and provides examples. Every sentence adds value, though the text could be trimmed slightly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description lists all return categories and mentions pipeworx:// citation URIs. Covers input constraints and hints at future types. Complete for a complex aggregator tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds key context: type is limited to 'company', value must be ticker or CIK, and names are not supported. This goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get everything about a company in one call' and provides specific user query examples. It distinguishes itself from sibling tools like resolve_entity and compare_entities by listing the aggregated data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when-to-use criteria with example queries. Advises against using with names and directs to resolve_entity instead. References alternative approach of calling multiple tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Clearly states deletion behavior, which aligns with the implied destructive nature. No annotations are provided, but the description is straightforward; it could add a note on irreversibility, but it is sufficient for a simple delete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-loading the verb and resource, plus a third sentence for pairing guidance. No unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, description covers purpose, usage context, and sibling pairing completely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'key' with schema coverage 100%. Description in input schema already defines it as 'Memory key to delete', so the tool description adds no new semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Delete') and resource ('memory by key'), clearly distinguishing it from sibling tools 'remember' and 'recall'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('context is stale, the task is done, or want to clear sensitive data') and names sibling tools to pair with ('Pair with remember and recall').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_go_mod (Grade: A)
Raw go.mod contents for a specific version. Useful for dependency analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| version | Yes | Version | |
| module_path | Yes | Module import path | |
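A sketch of a go.mod fetch; the module path and version here are borrowed from the sibling tools' schema examples and are purely illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_go_mod",
    "arguments": {
      "module_path": "github.com/gin-gonic/gin",
      "version": "v1.9.1"
    }
  }
}
```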
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations compensate for the brevity; the description only hints at a read operation ('raw contents') and lacks details such as error behavior or return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with purpose upfront; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple retrieval, but there is no output schema or behavioral detail; the description could specify the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters with basic descriptions; description adds no extra meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves raw go.mod contents for a version, distinguishing from siblings like get_module_info that might provide structured info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Suggests use for dependency analysis but does not mention alternatives or exclusion conditions among siblings like get_module_info.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_module_info (Grade: B)
Metadata for a specific version (resolved version, commit time, origin).
| Name | Required | Description | Default |
|---|---|---|---|
| version | Yes | Semver-ish version (e.g. "v1.9.1") | |
| module_path | Yes | Module import path | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It adds context that the tool returns 'resolved version, commit time, origin' beyond the schema. However, it does not explicitly state that this is a read-only operation or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence that is front-loaded and contains no redundant information. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with two parameters, the description is somewhat complete, but it lacks a description of the output structure (since no output schema is provided). The mention of three fields helps but could be more thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters having descriptions. The description does not add additional meaning beyond the schema for the parameters. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns metadata for a specific version including resolved version, commit time, and origin. It implicitly distinguishes from siblings like list_versions (which lists versions without metadata) and get_go_mod (which gets a file), but could be more explicit with the verb 'retrieve'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings such as latest_version or list_versions. The description does not mention prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
latest_version (Grade: B)
Most recent released version of a Go module — with the resolved time.
| Name | Required | Description | Default |
|---|---|---|---|
| module_path | Yes | Module import path | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility for behavioral disclosure. It only states the output (most recent version + resolved time) but does not mention caching, network calls, error handling (e.g., module not found), rate limits, or whether the tool is read-only. The lack of any such detail reduces transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that conveys the essential purpose and output. Every word is meaningful, no redundancy. It is well front-loaded with the key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description covers the basic function. However, it lacks details on the output format, potential errors, and behavior in edge cases (e.g., no releases). For a tool with no annotations, slightly more completeness could improve usability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter 'module_path' with description 'Module import path'. The tool description does not add additional context, such as format examples or validation requirements. Baseline 3 is appropriate as the schema already provides clear parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns 'the most recent released version of a Go module' including the resolved time. This effectively distinguishes it from siblings like list_versions (likely returns all versions) and get_module_info (likely returns detailed info for a given version). The verb and resource are specific, avoiding tautology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention scenarios, prerequisites, or exclusions (e.g., when the module has no releases). Compared to siblings, there's no clarification that list_versions should be used for non-latest versions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_versions (Grade: C)
List all published versions of a Go module.
| Name | Required | Description | Default |
|---|---|---|---|
| module_path | Yes | Module import path (e.g. "github.com/gin-gonic/gin", "golang.org/x/net") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It states the action but does not disclose any behavioral traits such as read-only nature, caching, rate limits, or error handling. The agent learns nothing beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no wasted words. However, it could be expanded slightly to include more context without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the core purpose but omits details about the return format, potential errors, or pagination. It is minimally adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single parameter. The description adds no extra meaning beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'all published versions of a Go module'. It is specific and distinguishes from siblings like 'latest_version' and 'get_module_info', but could be improved by specifying the return format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus alternatives, nor are there prerequisites or exclusions mentioned. The description is a single sentence that gives no usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade: A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
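A sketch of a data-gap report; the message is hypothetical, and the shape of the optional context object (a "pack" key here) is an assumption, since the schema only describes it as structured context naming a tool, pack, or vertical:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "pipeworx_feedback",
    "arguments": {
      "type": "data_gap",
      "message": "No tool exposes county-level rental comparables in the real estate pack.",
      "context": { "pack": "attom" }
    }
  }
}
```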
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full responsibility. It discloses rate limits (5 per identifier per day), that it's free and doesn't count against quota, and how feedback is processed (digests read daily, affecting roadmap). No contradictions with any annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a well-structured paragraph that front-loads the purpose, then guidelines, then constraints. It's fairly concise but could be slightly tighter; however, every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 parameters, no output schema), the description covers all necessary aspects: purpose, usage, constraints, and message content guidance. It is complete for an agent to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining the 'context' object (e.g., pack slug, tool name, vertical) and guiding the 'message' content (be specific, 1-2 sentences, 2000 chars max). This enriches the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for informing the Pipeworx team about bugs, missing features, data gaps, or praise, using specific verbs and resources. It distinguishes itself from sibling tools like 'ask_pipeworx' or 'discover_tools' by focusing on feedback rather than inquiries or exploration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (e.g., for bug reports, feature requests, praise) and what not to do (avoid pasting end-user prompts). It also provides context on rate limits and that it's free, guiding the agent on appropriate usage and alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It explains scoping (by identifier) and implies read-only behavior, but lacks details on errors, rate limits, or edge-case behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is 4 sentences, front-loaded with the main action, and every sentence provides essential context without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional param, no output schema), the description is complete: it covers purpose, usage, scoping, and pairing with related tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage for the single optional parameter. The description adds value by explaining the behavior when key is omitted and providing examples of typical keys, going beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves values saved via 'remember' or lists all keys, using specific verbs and resources. It distinguishes from siblings 'remember' and 'forget' by explicitly mentioning them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides concrete examples of when to use (target ticker, address, notes) and mentions scoping, but does not explicitly state when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (Grade: A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
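A sketch of a 30-day monitoring call built from the schema's own example values; an ISO date such as "2026-04-01" would work equally well for since:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "recent_changes",
    "arguments": {
      "type": "company",
      "value": "AAPL",
      "since": "30d"
    }
  }
}
```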
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the parallel fan-out to three sources, parameter formats, and return structure, but does not mention latency, error handling, or rate limits, which would be valuable for a tool querying external APIs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that front-loads purpose and usage. It is reasonably concise but contains slight redundancy in listing multiple example queries. Overall well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers return structure. Parameters are well-documented. For a tool of moderate complexity (3 params, no nested objects), it provides sufficient context for typical use, though missing details on error handling or latency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers 100% of parameters with descriptions, and the description adds value by explaining the `since` parameter formats (ISO date or relative shorthand) and providing typical usage ('30d', '1m'). This enriches understanding beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves recent changes for a company from multiple sources (SEC, GDELT, USPTO). It provides example user queries and distinguishes itself from sibling tools by its temporal focus and multi-source fan-out.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit 'use when' examples and covers typical monitoring scenarios. However, it does not explicitly state when not to use it or mention alternatives, but the context is sufficient for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
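Since remember, recall, and forget form a save/retrieve/delete lifecycle, a sketch of the full sequence may help; these are three separate requests sent one after another (shown together only for compactness), with the key and value drawn from the schema examples:

```json
{ "jsonrpc": "2.0", "id": 8, "method": "tools/call",
  "params": { "name": "remember",
              "arguments": { "key": "target_ticker", "value": "AAPL" } } }

{ "jsonrpc": "2.0", "id": 9, "method": "tools/call",
  "params": { "name": "recall",
              "arguments": { "key": "target_ticker" } } }

{ "jsonrpc": "2.0", "id": 10, "method": "tools/call",
  "params": { "name": "forget",
              "arguments": { "key": "target_ticker" } } }
```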
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description fully discloses behavior: memory is scoped by identifier, authenticated users get persistent memory, anonymous sessions retain for 24 hours, and stores key-value pairs. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise and well-structured: front-loads purpose, then usage, then details. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool complexity is low, schema covers parameters, no output schema needed. Description covers purpose, usage, persistence, and pairing with siblings, making it complete for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage, but description adds value by providing key naming conventions and value types, going beyond the schema's basic descriptions. Baseline 3, plus extra context justifies 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: save data for reuse later across conversations or sessions. It uses specific verb 'Save' and resource 'data', and distinguishes itself from sibling tools like recall and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use the tool: when discovering something worth carrying forward, with examples like resolved ticker, target address, etc. Also mentions scoping by identifier and persistence duration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade: A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
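A sketch of a name-to-identifier lookup; per the description's example, this request would be expected to resolve to AAPL and CIK 0000320193:

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "resolve_entity",
    "arguments": {
      "type": "company",
      "value": "Apple"
    }
  }
}
```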
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that the tool returns IDs plus citation URIs, lists specific ID systems (CIK, ticker, RxCUI, LEI), and gives examples. It implicitly indicates a read-only operation but doesn't discuss error conditions or rate limits, though that's acceptable for a simple lookup tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet comprehensive: front-loaded purpose, usage guidance, examples, and benefit—all in a few sentences with no unnecessary words. It is well-structured and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description fully covers what the tool does, why to use it, what inputs are expected (with examples), and what outputs are returned (IDs and citation URIs). Given no output schema, it provides sufficient context for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds value by providing example values and explaining how to use the 'value' parameter (e.g., 'For company: ticker (AAPL), CIK, or name'), which clarifies acceptable inputs beyond the schema's basic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Look up the canonical/official identifier for a company or drug.' It specifies the verb 'look up' and the resource 'canonical identifier,' and gives concrete examples (Apple, Ozempic) that differentiate it from sibling tools like entity_profile or compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when a user mentions a name and you need the CIK...' and 'Use this BEFORE calling other tools that need official identifiers.' It also notes it 'Replaces 2–3 lookup calls,' providing clear when-to-use and benefit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (Grade: A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
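A sketch of a fact-check call using the schema's example claim; per the description, the result would carry one of the five verdicts plus the actual value with citation and the percent delta:

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "tools/call",
  "params": {
    "name": "validate_claim",
    "arguments": {
      "claim": "Apple's FY2024 revenue was $400 billion"
    }
  }
}
```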
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It details return structure (verdict, structured form, actual value with citation, percent delta) and notes it replaces 4-6 sequential calls. It does not disclose potential latency, rate limits, or authentication needs, but for a single-parameter tool, this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with no unnecessary words. It is front-loaded with the core action ('Fact-check, verify...') and efficiently covers purpose, usage, scope, and return format in a few sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter, no output schema, and no annotations, the description is remarkably complete. It explains the tool's purpose, when to use it, its limitations (v1 supports only company-financial claims), and what the response contains. No critical information is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'claim' has a schema description with examples. The description adds context beyond the schema by providing natural-language examples of valid claims (e.g., 'Apple's FY2024 revenue...'). Since schema coverage is 100%, a baseline of 3 applies, but the examples boost it to 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-checking natural-language claims against authoritative sources. It provides specific verb+resource ('validate claim') and includes example queries, making it unambiguous. It also distinguishes itself by noting it replaces multiple sequential calls, providing unique value beyond siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when an agent needs to check whether something a user said is true' and gives example phrasings. It also specifies scope (company-financial claims for v1). However, it does not explicitly state when not to use the tool or mention alternative tools for non-financial claims, but the scope limitation is clearly implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes. Claiming the listing lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.