Server Details

Companies House MCP — UK statutory company registry (BYO key)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-companies-house
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4/5 across 14 of 14 tools scored. Lowest: 3.2/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools target distinct resources (company, filings, officers, PSCs) or actions (search, resolve, compare). However, 'ask_pipeworx' is a catch-all that could overlap with other tools, and 'entity_profile' bundles multiple data sources potentially duplicating calls to get_company, get_filings, etc., causing minor ambiguity.

Naming Consistency: 3/5

The majority of tools follow a verb_noun pattern (e.g., get_company, search_companies), but a few are bare verbs (forget, recall, remember) or noun_noun (entity_profile, pipeworx_feedback). This mixed convention, though still readable, is inconsistent.

Tool Count: 5/5

With 14 tools covering company search, profile, filings, officers, PSCs, plus memory and cross-entity comparison, the tool count fits the scope well. Each tool has a clear purpose and none seem redundant.

Completeness: 4/5

The set covers core Companies House operations (search, profile, filings, officers, PSCs) and adds utilities like entity resolution and comparison. Minor gaps exist, such as no direct document retrieval or searching by SIC code, but the core workflow is supported.

Available Tools

16 tools
ask_pipeworx (Grade: A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
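
To make the call shape concrete, here is a minimal sketch of the JSON-RPC `tools/call` request an MCP client would send for this tool. The envelope follows the MCP spec; the question is one of the examples above, and transport wiring is omitted.

```python
import json

# Minimal sketch of an MCP tools/call request for ask_pipeworx.
# The envelope (jsonrpc/method/params) is standard MCP; the question is
# taken from the examples in the tool description above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))  # send this over your Streamable HTTP session
```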
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that Pipeworx selects the appropriate tool and fills arguments, indicating it is a meta-tool. However, it does not mention potential failure modes, authentication requirements, or any side effects. Additional detail on behavior (e.g., handling ambiguous questions) would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: a clear purpose statement, an explanation of functionality, an encouragement to use natural language, and illustrative examples. Every sentence adds value, and there is no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema) and the presence of many sibling tools, the description provides sufficient context for a meta-tool. It explains the tool's role and gives examples of supported queries. However, it could mention that the answer comes from the best available data source and hint at the range of topics covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'question' is fully described in the schema with 'Your question or request in natural language.' The description adds value by providing concrete examples of acceptable inputs, which helps agents understand the expected format and scope. With 100% schema coverage, the description usefully complements without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: asking questions in plain English and getting answers from the best data source. It provides specific examples (trade deficit, adverse events, 10-K filing) that illustrate the scope. The purpose is distinct from sibling tools, which are more specific (e.g., compare_entities, get_company).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description guides usage by listing trigger phrasings ('What is X?', 'Look up Y') and by stating it is for when you don't want to figure out which Pipeworx pack/tool to call. It implies when to use this tool (for natural-language queries) but does not explicitly list when not to use it or mention alternatives. The examples give clear context for appropriate questions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (Grade: A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
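
A sketch of the two argument shapes, using the enum values and example inputs from the schema above. The `call_tool` helper is hypothetical, a stand-in for however your MCP client issues `tools/call`.

```python
# Hypothetical helper: builds the tools/call payload a real MCP client would send.
def call_tool(name: str, arguments: dict) -> dict:
    return {"method": "tools/call", "params": {"name": name, "arguments": arguments}}

# type="company": 2-5 tickers/CIKs, compared on SEC EDGAR/XBRL fundamentals.
company_req = call_tool("compare_entities", {"type": "company", "values": ["AAPL", "MSFT"]})

# type="drug": 2-5 names, compared on adverse events, approvals, and active trials.
drug_req = call_tool("compare_entities", {"type": "drug", "values": ["ozempic", "mounjaro"]})
```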
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description covers what data is returned for each type and mentions URIs. It does not address error handling or rate limits, but provides meaningful behavioral insight.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description front-loads the main purpose, details what each entity type pulls, and closes with a value-added line on efficiency. It contains no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains return data types and URIs but does not detail output format. Given no output schema, it covers key aspects well but could add example or structure notes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema descriptions are complete, the description adds domain-specific details (e.g., revenue for companies, adverse-event count for drugs) that explain what each type returns, enriching beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares 2-5 entities side by side, with specific metrics for each type ('company' or 'drug'). It differentiates from siblings (e.g., get_company) by offering a bulk comparison across entities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when comparing up to 5 entities and notes efficiency gains, but lacks explicit when-not-to-use or alternative tool references.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)
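
As a sketch, a typical discovery request looks like this; the query string is one of the schema's own examples, and the limit is trimmed below its default of 20.

```python
# A discovery request: describe the task in natural language and cap the
# result count (default 20, max 50 per the schema).
request = {
    "name": "discover_tools",
    "arguments": {
        "query": "analyze housing market trends",  # example from the schema
        "limit": 10,
    },
}
```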
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It describes a search function that returns tool names and descriptions, which implies a read-only operation. However, it does not elaborate on side effects, authorization needs, or other behaviors beyond the basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of four concise sentences with no redundancy. The first states the core function, and the rest provide crucial usage guidance. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (two parameters, no output schema, no annotation), the description adequately covers its purpose and usage. It explains when to use it and what it returns. The return format could be slightly more detailed, but it is sufficient for an AI agent to understand.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so both parameters are already documented. The description adds no additional meaning beyond what the schema provides; it merely echoes the schema content (e.g., 'default 20, max 50' for limit). Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description leads with a clear verb ('Find tools') and states what comes back: the top-N most relevant tools with names and descriptions. It also distinguishes itself by advising agents to call it first when many tools are available, setting it apart from sibling tools that serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this FIRST when you have many tools available and want to see the option set (not just one answer)', providing clear context for when to use the tool. It does not explicitly mention when not to use it or name alternatives, but the guidance is direct and helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

entity_profile (Grade: A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
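
Since the schema says names are not supported, a plausible two-step flow is to resolve a name first, then request the profile. In the sketch below, the "AAPL" in step 2 stands in for whatever identifier step 1 actually returned; the `call_tool` helper is hypothetical.

```python
def call_tool(name: str, arguments: dict) -> dict:
    # Hypothetical stand-in for an MCP tools/call round trip.
    return {"method": "tools/call", "params": {"name": name, "arguments": arguments}}

# Step 1: turn a bare name into an official identifier (entity_profile
# rejects names, so resolve first, per the schema note above).
resolve_req = call_tool("resolve_entity", {"type": "company", "value": "Apple"})

# Step 2: request the full profile with the resolved ticker or CIK.
# "AAPL" is illustrative, standing in for step 1's actual result.
profile_req = call_tool("entity_profile", {"type": "company", "value": "AAPL"})
```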
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden of behavioral disclosure. It mentions returning pipeworx:// citation URIs and lists data sources, but does not state whether the operation is read-only, or cover rate limits or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with four sentences that are front-loaded with the main purpose. Every sentence adds necessary information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multi-source profile) and lack of output schema, the description explains included data sources and URI format. It is mostly complete but could mention any limits or caching behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds significant value by explaining that only 'company' type is supported, giving examples for 'value' (ticker or CIK), and directing users without names to use resolve_entity first.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a full profile of a company across multiple Pipeworx packs, specifying the resource and action. It distinguishes itself from siblings by explaining it replaces 10+ sequential pack-tool calls across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow guidance, directing agents to pass a ticker or zero-padded CIK and to call resolve_entity first when only a name is available. However, it lacks explicit when-not-to-use scenarios relative to narrower tools like get_company.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema)
- key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It states 'Delete' indicating a destructive action, but fails to mention what happens if the key does not exist, whether the operation is idempotent, or any side effects. With zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences with no redundant information. Every word serves a purpose, making it highly efficient for agent consumption.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (one parameter, no output schema, no nested objects), the description is relatively complete for a basic delete operation. However, it does not specify the return behavior or error handling (e.g., missing key), which are gaps considering the lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% as the single parameter 'key' is described in the schema. The description adds no additional meaning beyond 'by key'; thus, the baseline score of 3 is appropriate since the schema already carries the semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Delete a previously stored memory by key' clearly specifies the action (delete), the resource (stored memory), and the method (by key). It effectively distinguishes the tool from siblings like 'remember' (store) and 'recall' (retrieve).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives brief when-to-use cues (stale context, finished task, clearing sensitive data) and points to the companion tools remember and recall, but it offers no prerequisites or explicit when-not-to-use guidance. The purpose is straightforward enough for an agent to infer the rest.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company (Grade: A)

Fetch a single UK company profile by company number. Returns registered name, status, type, SIC codes, registered office, accounts/confirmation-statement due dates, and links to officers/filings/charges/PSCs.

Parameters (JSON Schema)
- company_number (required): UK company number (e.g., "00006245")
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It effectively lists the returned data (status, SIC codes, due dates, links), but does not disclose behavior for non-existent companies or any required authentication, which is a minor gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two succinct sentences with no waste. The first sentence states the purpose, the second lists returned fields, making it easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple (one param, no output schema). The description covers the main return fields, but omits error handling scenarios and authentication requirements, which would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100% for the single parameter, with an example provided in the schema. The tool description adds no additional meaning beyond what the schema already offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Fetch a single UK company profile by company number', which is a specific verb and resource. It distinguishes from sibling tools like get_filings, get_officers, and search_companies by focusing on the main company record.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives like search_companies. It implies use when you have a company number, but lacks guidance on prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_filings (Grade: A)

Filing history for a UK company — annual returns, accounts, changes of address/directors, charges, mortgages. Returns category, description, filing date, transaction ID, and links to filed documents.

Parameters (JSON Schema)
- company_number (required): UK company number
- category (optional): Optional filter (accounts | confirmation-statement | officers | etc.)
- start_index (optional): Pagination offset (default 0)
- items_per_page (optional): 1-100 (default 25)
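
A pagination sketch under the schema's stated defaults (start_index 0, items_per_page 25). The company number is the example from get_company's schema, and the stop condition is an assumption, since the response shape is not published here.

```python
# Page through a company's filing history 25 records at a time.
page_size = 25  # schema default
for start_index in (0, 25, 50, 75):
    request = {
        "name": "get_filings",
        "arguments": {
            "company_number": "00006245",  # example number from get_company's schema
            "category": "accounts",        # optional filter from the schema
            "start_index": start_index,
            "items_per_page": page_size,
        },
    }
    # Send via tools/call; stop when a page returns fewer than page_size
    # items (assumed convention -- the response shape is not documented here).
```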
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description bears full burden. It discloses that it returns specific fields (category, description, etc.) but does not specify behavioral traits like pagination behavior (though parameters exist), rate limits, or read-only nature. Adequate but minimal beyond schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two compact sentences, front-loaded with the key purpose. While efficient, it could separate the purpose from the returned fields more cleanly. No redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description should list returned fields, which it does partially (category, description, filing date, transaction ID, links). But parameter count is 4, all described in schema. Missing details like response format or pagination metadata. Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are well-documented in the schema. The description adds 'accounts | confirmation-statement | officers | etc.' as category examples, but this overlaps with the schema's 'Optional filter (accounts | confirmation-statement | officers | etc.)'. No new semantic meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it provides filing history for UK companies, specifies types (annual returns, accounts, changes, charges, mortgages), and lists returned fields (category, description, date, transaction ID, document links). This verb+resource combination distinguishes it from siblings like 'get_company' or 'get_officers'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. While the description implies it is for retrieving filing history, it does not compare to alternatives or specify prerequisites (e.g., need a company number). Usage context is assumed but not clarified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_officers (Grade: A)

List directors and secretaries for a UK company. Returns name, role, appointment/resignation dates, occupation, country of residence, nationality, and date of birth (month/year only).

Parameters (JSON Schema)
- company_number (required): UK company number
- register_type (optional): directors | secretaries | llp-members (optional restriction)
- start_index (optional): Pagination offset (default 0)
- items_per_page (optional): 1-100 (default 35)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden. It lists returned fields but does not disclose pagination behavior (start_index, items_per_page) or result ordering, which are important for a listing tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, front-loaded with the action, and every word is relevant. No unnecessary text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description covers the return fields well. However, it omits pagination details, which are relevant for a list tool with multiple pages. Still close to complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions. The description adds no extra meaning to parameters beyond what the schema provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'List directors and secretaries for a UK company' with a specific verb and resource, and distinguishes itself from sibling tools like get_company and get_filings by focusing on officers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by the purpose; no explicit guidance on when to use or alternatives is provided. The tool is straightforward, but the description lacks when-not-to-use or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_persons_with_significant_control (Grade: B)

Beneficial-ownership records (PSCs) for a UK company — individuals or entities with significant control. Returns name, kind, nature of control, ownership-of-shares brackets, and country of residence.

Parameters (JSON Schema)
- company_number (required): UK company number
- start_index (optional): Pagination offset (default 0)
- items_per_page (optional): 1-100 (default 25)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must fully disclose behavior. It mentions returned fields but omits that this is a read-only operation, pagination behavior (start_index, items_per_page), or any side effects. The schema implies pagination, but the description does not confirm safety or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two compact sentences that convey the core purpose efficiently. It includes key details like output fields, but opens with a noun fragment rather than a verb-led statement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description partially covers return fields but omits potential fields like date of birth or ceased dates. Pagination parameters are not explained in context. Adequate but not comprehensive for a 3-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all 3 parameters described). The description adds no additional meaning beyond the schema, e.g., does not clarify that the company_number is a string of 8 characters or provide hints for start_index and items_per_page usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns beneficial-ownership records (PSCs) for a UK company, specifying fields returned. It differentiates from siblings like get_company and get_officers by focusing on persons with significant control.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like get_officers or get_company. It does not explain prerequisites or context such as needing a valid UK company number.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (Grade: A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema)
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses rate limiting and that the tool is free. No annotations provided, so description carries full burden. Does not detail what happens after sending (e.g., asynchronous, response format), but acceptable for a feedback tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six short sentences with no redundant information. The purpose is front-loaded, and every sentence adds meaningful guidance without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Sufficient for a simple feedback tool. Covers purpose, usage, rate limit. Could optionally mention response behavior, but not essential given the nature of the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage and enum descriptions for 'type'. Description adds value by instructing on message content (describe tools/data, avoid verbatim prompts) and reinforcing use cases, enhancing the schema information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action ('Tell the Pipeworx team') and enumerates specific use cases (bug reports, feature requests, data gaps, praise). Distinguishes itself from sibling tools, which are primarily data retrieval or memory operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists when to use (bug reports, etc.) and provides formatting guidance (describe tools/data, no verbatim prompts). Mentions rate limit (5 per day). Lacks explicit 'when not to use' but context makes it clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It explains basic behavior (retrieval by key or listing) but does not specify return format, error handling (e.g., missing key), or persistence guarantees. Adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four short sentences, no fluff. The verb 'Retrieve' is front-loaded, and the structure is efficient and direct.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description does not explain return values (e.g., what is returned for a missing key). It covers the core functionality but lacks completeness for an agent to fully understand expected outcomes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for 'key'. The description reinforces the parameter's behavior (omit to list all) but adds little beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves a memory by key or lists all memories, using specific verb 'Retrieve' and resource 'memory'. It distinguishes from sibling tool 'remember' which stores memories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explains when to use: to look up context the agent stored earlier (a target ticker, an address, prior research notes) without re-deriving it from scratch. It names its companions (remember to save, forget to delete) but lacks explicit exclusion or when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_changes (Grade: A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
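
The `since` parameter accepts two forms; here is a short sketch of both argument shapes, using the schema's own examples.

```python
# Absolute window start: ISO date.
iso_args = {"type": "company", "value": "AAPL", "since": "2026-04-01"}

# Relative window: shorthand like "7d", "30d", "3m", "1y".
# "30d" is the schema's suggested value for routine monitoring.
relative_args = {"type": "company", "value": "0000320193", "since": "30d"}
```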
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden of disclosure. It transparently explains the parallel fan-out to SEC EDGAR, GDELT, USPTO, and the return structure including total_changes and URIs. It does not mention limitations like rate limits or read-only status, but the described behavior is clear and non-destructive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured. It front-loads the core purpose, then efficiently lists supported sources, parameter behavior, and return format. Every sentence adds value; there is no fluff or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (three data sources, parallel execution, custom URIs) and absence of output schema, the description is complete. It covers entity type constraints, accepted date formats, expected return fields, and concrete use cases. No critical gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already covers all parameters with descriptions (100% coverage). The description adds valuable context: clarifies 'type' is only 'company', provides examples for 'since' (ISO vs relative), and explains 'value' as ticker or CIK. It also suggests default values like '30d'. This enriches the schema without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving recent changes for an entity since a given time. It details the behavior for type='company' and mentions output structure. However, it does not explicitly differentiate from sibling tools like 'get_filings' or 'entity_profile', which might overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage context ('brief me on what happened with X' or change-monitoring workflows) and describes acceptable 'since' formats. It lacks guidance on when not to use this tool or alternatives, but the given context is sufficient for typical scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
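
The three memory tools form a simple key-value lifecycle. A sketch follows, with the hypothetical `call_tool` helper standing in for the MCP round trip and the key/value drawn from the schema's examples.

```python
def call_tool(name: str, arguments: dict) -> dict:
    # Hypothetical stand-in for an MCP tools/call round trip.
    return {"method": "tools/call", "params": {"name": name, "arguments": arguments}}

# Save a fact worth carrying forward...
call_tool("remember", {"key": "target_ticker", "value": "AAPL"})

# ...read it back later (or omit "key" entirely to list all saved keys)...
call_tool("recall", {"key": "target_ticker"})

# ...and delete it once the task is done or the context is stale.
call_tool("forget", {"key": "target_ticker"})
```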
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds session and auth behavior (persistent vs 24-hour). Lacks details on storage limits or side effects beyond basic persistence.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five concise sentences covering purpose, use cases, scoping, persistence, and companion tools; no redundancy, and the core action is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool with 2 string params and no output schema. Description covers purpose, use cases, and session constraints fully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good parameter descriptions. The tool description reinforces usage patterns but doesn't add new semantic meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with 'Save data the agent will need to reuse later', a specific verb and resource. It clearly distinguishes the tool from siblings like 'recall' (retrieve) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases: carrying forward a resolved ticker, a target address, a user preference, or a research subject. Implicitly contrasts with its siblings via the closing pairing note. No explicit when-not-to-use, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (Grade: A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It reveals the return includes IDs and pipeworx:// URIs, and mentions data sources, but does not disclose error behavior, rate limits, or authentication needs. Adequate but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is six short sentences, front-loaded with the core action, and includes all essential information without unnecessary words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description explains return values (IDs and URIs) and covers both parameter types with examples. It is complete for a tool with two parameters and an enum, though it could mention what happens on failure (e.g., not found).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for both parameters. The description adds value by explaining the mapping for each type and providing concrete examples (ticker, CIK, brand/generic names), going beyond the schema's basic enum and string description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool resolves an entity to canonical IDs across Pipeworx data sources, specifying two entity types (company, drug) and their target systems (SEC EDGAR, RxCUI). It distinguishes from sibling tools like get_company by claiming it replaces 2–3 lookup calls in a single call.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this BEFORE calling other tools that need official identifiers' and notes it 'Replaces 2–3 lookup calls', placing the tool at the start of a workflow. It does not provide explicit exclusions or alternatives, but the context is clear for common use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies (Grade: A)

Search UK Companies House by company name. Returns matches with company number, name, status (active/dissolved), type, registered office address, and incorporation date. Use the company_number to look up officers, filings, and PSCs.

Parameters (JSON Schema)
- query (required): Company name (full or partial)
- start_index (optional): Pagination offset (default 0)
- items_per_page (optional): 1-100 (default 20)
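
The description points at the intended chain: search by name, then feed the returned company_number into the lookup tools. A sketch with an arbitrary query; the company number is the example from get_company's schema, not a real search hit.

```python
# Step 1: fuzzy search by name (paginated; default 20 per page).
search_req = {"name": "search_companies", "arguments": {"query": "Marks and Spencer"}}

# Step 2: take a company_number from the results and drill down.
# "00006245" is the example number from get_company's schema, not a real hit.
profile_req = {"name": "get_company", "arguments": {"company_number": "00006245"}}
officers_req = {"name": "get_officers", "arguments": {"company_number": "00006245"}}
```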
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description is straightforward for a read-only search tool. It does not disclose any potential behavioral traits such as rate limits, authentication requirements, or data freshness, which would be useful for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with three sentences: the first states the purpose, the second lists the returned fields, and the third gives actionable next-step guidance. No superfluous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately lists return fields and suggests next steps. It covers the necessary information for a simple search tool, though it could mention default pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters. The description adds overall context but does not elaborate on parameter usage beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches UK Companies House by company name, specifying the returned fields (company number, name, status, etc.). It distinguishes from sibling tools like get_company which look up by company number.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on using the result (company_number) for further lookups, indicating how to proceed after a search. However, it does not explicitly state when to use this tool versus alternatives like get_company or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_claim (Grade: A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema)
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
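
The description enumerates five verdicts, which suggests how a caller should branch on the result. The sketch below assumes the response carries a `verdict` field holding one of those values; that field name is inferred from the prose, since no output schema is published.

```python
# The five verdicts enumerated in the description.
VERDICTS = {"confirmed", "approximately_correct", "refuted", "inconclusive", "unsupported"}

def summarize(result: dict) -> str:
    # "verdict" as a field name is an assumption drawn from the description.
    verdict = result.get("verdict", "inconclusive")
    if verdict not in VERDICTS:
        raise ValueError(f"unexpected verdict: {verdict}")
    return f"Claim was {verdict.replace('_', ' ')}."

# Example claim, taken straight from the parameter's schema example:
arguments = {"claim": "Apple's FY2024 revenue was $400 billion"}
```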
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Given no annotations, the description carries full burden. It details the return types (verdict options, extracted form, citation, delta) and the data source (SEC EDGAR+XBRL). It does not mention potential latency or rate limits, but the read-only nature is inferred. Overall, it provides sufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is dense but efficient: it states the action, the supported domain, the return structure, and the value proposition, and every clause adds necessary information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one parameter, no output schema, and no annotations, the description is remarkably complete. It covers the tool's capability, supported domain, return structure, and even provides examples of verdict types. An agent can confidently decide when to invoke it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for the single parameter 'claim', including an example. The description adds value by clarifying the acceptable domain ('company-financial claims') and implying the natural-language format, which goes beyond the schema example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: fact-check a natural-language claim against authoritative sources. It specifies the domain (company-financial claims via SEC EDGAR+XBRL) and lists return values (verdict, structured form, actual value with citation, delta), distinguishing it from siblings like compare_entities or get_company.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states that the tool replaces 4–6 sequential agent calls, guiding when to use it for efficiency. It also specifies the supported domain (company-financial claims), implying when not to use (other claim types). However, it does not directly name alternative sibling tools like compare_entities for comparison queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
