Hibp
Server Details
Have I Been Pwned MCP
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-hibp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 14 of 14 tools scored. Lowest: 3.3/5.
Several tools have overlapping purposes (check_password and check_password_prefix both handle password checks; ask_pipeworx is a broad query tool that could substitute for many others). While descriptions help, the set is not clearly distinct, leading to possible agent confusion.
Most tools follow a verb_noun pattern (check_account, list_breaches, resolve_entity), but there are deviations like pipeworx_feedback (noun_verb) and check_password_prefix (verb_noun_noun). Overall consistent enough to be predictable.
14 tools is a reasonable number for a server that mixes HIBP security features with Pipeworx utilities. It feels slightly heavy but each tool serves a distinct function, so the count is appropriate.
The HIBP-specific tools cover core functionalities (breach lookup, password checking) but miss some features (e.g., paste search). The inclusion of unrelated Pipeworx tools fills gaps for entity resolution and memory, but the overall surface feels incomplete for a dedicated HIBP server.
Available Tools
17 tools
ask_pipeworx (A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavior. It mentions that it picks the right tool and fills arguments, but does not explain limitations, failure modes, latency, or data source specifics. The examples help but leave gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences plus example list. Every sentence adds value, with no redundancy. Front-loads the key action and value proposition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the main use case well. However, it lacks details on behavioral constraints (e.g., scope of questions, error handling), which keeps it from a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'question' has a schema description that already states it's a natural language request. The tool description adds no further semantics beyond the schema. Schema coverage is 100%, so baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool answers natural language queries by selecting the best data source, with concrete examples. It is distinct from sibling tools which are more specific (e.g., password checks, breach lookups).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for any natural language question, with examples. It does not explicitly exclude scenarios or compare to alternatives, but the meta-tool nature is clear. No explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_account (A)
Look up breaches an email account has been seen in. REQUIRES a paid HIBP subscription key (pass _apiKey). Returns the set of breach names; combine with get_breach for details.
| Name | Required | Description | Default |
|---|---|---|---|
| account | Yes | Email address | |
| truncate | No | Return only breach names (default true). Set false for full breach objects (counts as 1 query). |
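The `_apiKey` requirement appears only in the prose, not in the parameter table, so a worked call may help. A minimal sketch using the public MCP TypeScript SDK, assuming the paid key is passed as an extra `_apiKey` argument exactly as the description says; the server URL and client name are placeholders:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect over Streamable HTTP (placeholder endpoint), then call check_account.
const client = new Client({ name: "example-agent", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp")));

const breaches = await client.callTool({
  name: "check_account",
  arguments: {
    account: "user@example.com",
    truncate: false,                    // full breach objects (counts as 1 query)
    _apiKey: process.env.HIBP_API_KEY,  // paid HIBP subscription key, per the description
  },
});
```

The later sketches in this listing assume a `client` connected the same way.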
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses that the tool requires a paid key, returns breach names, and notes the truncate parameter behavior. It does not mention side effects or failure modes, but the core behavior is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, no extraneous words. It front-loads the main purpose and includes essential usage info without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and no output schema, the description explains the main output (set of breach names) and refers to get_breach for details. It covers the key context: required key, input params, and how to use with siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are fully described in the schema. The description does not add additional meaning beyond repeating that it uses an email account and truncate. It meets the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Look up breaches') and the resource ('an email account has been seen in'). It distinguishes itself from sibling tools by specifying it returns breach names and recommending combining with get_breach for details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states the requirement of a paid HIBP subscription key passed as '_apiKey', telling when to use this tool. It also provides guidance on combining with get_breach for details, differentiating from siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_password (A)
Check whether a password appears in known breach corpora. Uses k-anonymity: the password is SHA-1ed locally, only the first 5 hex chars leave the worker, and the response is filtered to match the rest. Returns pwned count (0 = not seen). The password itself is never transmitted.
| Name | Required | Description | Default |
|---|---|---|---|
| password | Yes | Password to check (stays inside the worker) |
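The k-anonymity flow the description outlines matches HIBP's public Pwned Passwords range API, so the mechanism can be illustrated independently of this server. A minimal sketch, assuming Node 18+ for the global `fetch`:

```typescript
import { createHash } from "node:crypto";

// SHA-1 the password locally, send only the first 5 hex chars, then look for
// the remaining 35-char suffix in the response ("SUFFIX:COUNT" per line).
async function pwnedCount(password: string): Promise<number> {
  const sha1 = createHash("sha1").update(password).digest("hex").toUpperCase();
  const prefix = sha1.slice(0, 5);
  const suffix = sha1.slice(5);

  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  const lines = (await res.text()).split("\n");

  for (const line of lines) {
    const [candidate, count] = line.trim().split(":");
    if (candidate === suffix) return Number(count);
  }
  return 0; // 0 = not seen in known breach corpora
}
```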
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: SHA-1 hashing, sending only first 5 hex chars, filtering response locally, and that the password never leaves. This is comprehensive for the tool's privacy model.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: first states purpose, second explains mechanism. No extraneous detail, efficiently communicates the tool's value and privacy guarantee.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description succinctly explains the return value ('pwned count, 0 = not seen'). All necessary information about privacy and operation is provided without gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema describes the 'password' parameter as staying inside the worker. The description reinforces this and explains the k-anonymity process, adding security context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks a password against known breach corpora using k-anonymity. It distinguishes itself from the sibling 'check_password_prefix' by explaining the local hashing and prefix-based matching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the privacy-preserving mechanism and the function, implying it should be used to safely check a password without revealing it. However, it does not explicitly mention when not to use it or compare to alternatives like 'check_password_prefix'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_password_prefix (A)
Direct k-anonymity lookup: pass the first 5 hex chars of a SHA-1 password hash, get back all SHA-1 suffixes with their pwned counts. Use this if you're hashing client-side and only want to send the prefix.
| Name | Required | Description | Default |
|---|---|---|---|
| sha1_prefix | Yes | 5 hexadecimal characters |
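A sketch of the client-side hashing this tool is meant for, assuming an already-connected MCP `client`; the suffix/count response shape is inferred from the prose, since there is no output schema:

```typescript
import { createHash } from "node:crypto";
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hash locally and send only the first 5 hex characters of the SHA-1 digest.
async function checkPrefix(client: Client, password: string) {
  const sha1 = createHash("sha1").update(password).digest("hex").toUpperCase();
  return client.callTool({
    name: "check_password_prefix",
    arguments: { sha1_prefix: sha1.slice(0, 5) },
  });
}
```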
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes the k-anonymity mechanism and return data (suffixes with counts). Could mention behavior if prefix invalid or no matches, but adequate given no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted text. First sentence defines action, second gives usage guidance. Front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with no output schema, the description covers what it does, what to send, what to expect back, and when to use it. Fully adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes 'sha1_prefix' as 5 hex chars. Description adds value by explaining it's the first 5 of a SHA-1 hash and the k-anonymity purpose. Adds context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it's a k-anonymity lookup using first 5 hex chars of SHA-1 hash, returning suffixes with counts. Distinguishes from sibling 'check_password' by being a client-side prefix check.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this if you're hashing client-side and only want to send the prefix', providing clear when-to-use context relative to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the return data (paired data + resource URIs) and the metrics for each type, but does not discuss behavioral aspects like read-only nature, authentication, or error handling. Since no annotations are provided, the description carries full burden, and it partially fulfills it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph of four sentences, with the main action in the first sentence. No redundant information; every sentence adds necessary detail about inputs, outputs, and efficiency gains.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains what the tool returns for each entity type and the allowed input range (2–5 entities). This is sufficient for an agent to correctly invoke the tool and interpret results, covering key aspects of the tool's functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The tool description adds significant value by detailing the specific metrics returned for each 'type' value (e.g., revenue, net income for company; adverse-event count for drug), which is beyond the schema's simple enum description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Compare 2–5 entities side by side in one call.' It specifies two entity types (company, drug) and the metrics returned for each, distinguishing it from sibling tools like 'ask_pipeworx' or individual lookup tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description suggests when to use this tool by noting it 'replaces 8–15 sequential agent calls,' implying it is efficient for multi-entity comparisons. It does not explicitly mention when not to use it or alternative tools, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It describes expected behavior (search and return relevant tools) and implies read-only operation, but does not explicitly disclose side effects or safety profile. Adequate given simplicity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each earning its place: first defines purpose, second gives usage guidance. No wasted words; critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the description covers all necessary context: what it does, what it returns (names and descriptions), and when to use it. No output schema needed; description sufficiently informs the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing basic descriptions for both parameters. The description adds value by giving an example for 'query' and stating default and max for 'limit', which enriches understanding beyond schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches a tool catalog by natural language description and returns relevant tools. It uses specific verb 'Search' and resource 'Pipeworx tool catalog', distinguishing it from sibling tools focused on other operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to call this tool first when 500+ tools are available, providing clear context for when to use it. Does not list alternatives or exclusions, but the directive 'FIRST' implies a prescribed workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that it returns citation URIs and replaces multiple calls. It is a read operation and no destructive behavior is implied. Could add note on potential limits, but transparent enough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four to five sentences, each with a clear purpose: purpose, content, output format, efficiency, alternative. No fluff, front-loaded with core function. Excellent structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description must explain return values, which it does (list of data types and URIs). Covers main use case with sufficient detail for agent to decide. Missing are potential data volume or error conditions, but not critical for a profile tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds value by clarifying that 'value' can be a ticker or CIK, confirms only 'company' is supported for type, and provides a fallback (resolve_entity) for names. This goes beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'get full profile' and the resource 'entity across every relevant Pipeworx pack'. It distinguishes itself from the sibling tool 'usa_recipient_profile' and mentions replacing 10-15 sequential calls, making the purpose very specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use: for a full profile. Provides exclusion: 'For federal contracts call usa_recipient_profile directly (too slow to bundle).' Also advises using resolve_entity if only a name is available, giving clear alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (B)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral aspects. It only says 'delete' but does not disclose impact, error handling, or whether deletion is permanent. Minimal transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, using a single sentence that gets straight to the point. No unnecessary words; it's appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete action with one parameter and no output schema, the description is fairly complete. However, it lacks details on error behavior or idempotency, which could be expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'key' described as 'Memory key to delete'. The description adds no extra semantics beyond the schema, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it deletes a stored memory by key, using a specific verb and resource. Among siblings like 'remember' and 'recall', the purpose is distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, or any context on prerequisites or limitations. The description only states what it does, not when to apply it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_breach (A)
Fetch a single breach by name (the "Name" field from list_breaches, e.g., "Adobe", "LinkedIn"). Returns full breach metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Breach name (case-sensitive) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that the tool fetches data and returns 'full breach metadata', indicating a read operation with no side effects. No hidden behaviors are omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states the action and parameter, second states the return value. No redundant words, information is front-loaded and efficiently delivered.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool with no output schema, the description reasonably covers purpose, input, and output. Lacks mention of error handling for missing names, but overall adequate for the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The parameter schema has 100% coverage and describes 'name' as case-sensitive. The description adds further context by linking the name to list_breaches output and providing examples, enhancing semantic understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'Fetch', the resource 'single breach', and specifies the input parameter 'name' with examples. It distinguishes from the sibling 'list_breaches' by indicating it retrieves one breach by name, not a list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for use: use when you have a specific breach name from list_breaches. However, it does not explicitly state when not to use or mention alternatives, though sibling tools are distinct enough to avoid confusion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_breaches (A)
List all publicly-known data breaches catalogued by HIBP. Optionally filter to a specific domain (e.g., "linkedin.com"). Returns name, title, breach date, added date, affected accounts, description, data classes exposed, and verification status.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | No | Restrict to a specific breached domain |
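A sketch of the chaining the descriptions suggest (list breaches for a domain, then fetch one by its case-sensitive Name), assuming an already-connected MCP `client`; the breach name used here is illustrative:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// List breaches for a domain, then pull full metadata for one of them by Name.
async function breachesForDomain(client: Client, domain: string) {
  const names = await client.callTool({ name: "list_breaches", arguments: { domain } });
  // Pick a breach Name from the result (e.g. "LinkedIn"), then fetch details:
  const detail = await client.callTool({ name: "get_breach", arguments: { name: "LinkedIn" } });
  return { names, detail };
}
```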
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With empty annotations, the description fully carries the burden. It states the tool lists public breaches and enumerates return fields, but doesn't explicitly confirm read-only nature or mention rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists key return fields and covers the tool's behavior completely for a simple list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds an example value ('linkedin.com') for the optional domain filter, adding value beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'List' and resource 'publicly-known data breaches catalogued by HIBP', clearly distinguishing it from sibling tools like get_breach which fetches a single breach.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies optional domain filtering but provides no explicit guidance on when to use this tool versus alternatives like get_breach or check_account, and no when-not scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_data_classes (A)
Canonical list of HIBP "data class" tags (e.g., "Email addresses", "Passwords", "Geographic locations"). Useful for filtering breaches.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It describes the tool as returning a 'canonical list,' indicating a read-only, safe operation. No behavioral surprises are expected, though it does not disclose output format or pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences — the first states the tool's primary function with examples, the second adds practical context. Every word earns its place; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless tool with no output schema, the description is fully complete. It tells the agent exactly what the tool does and why it's useful, leaving no ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (coverage 100%), so the baseline is 4. The description adds meaning by explaining the content of the list and its purpose, going beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns the canonical list of HIBP data class tags, with concrete examples (e.g., 'Email addresses'). This uniquely distinguishes it from sibling tools like list_breaches, which list breaches rather than tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the tool is 'useful for filtering breaches,' giving some usage context, but it does not explicitly state when to use this versus alternatives (e.g., using known tags directly). It implies the purpose but lacks explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must cover behavioral traits. It discloses rate limiting and appropriate message content, but does not explain what happens after sending (e.g., if feedback is stored, if confirmation is issued, or if it is destructive). This leaves some uncertainty.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences with no wasted words. It is front-loaded with the primary purpose and uses clear, direct language. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple nature of the tool (3 params, no output schema), the description covers purpose, usage, content rules, and limits. It is missing details on post-submission behavior, but for a feedback tool this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds a content guideline but no deeper parameter semantics beyond what the schema already provides. It does not elaborate on the context object or enum values further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states clearly the purpose: 'Send feedback to the Pipeworx team.' It enumerates use cases (bug reports, feature requests, etc.), making the scope distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use context and content guidance ('do not include the end-user's prompt verbatim'). It also mentions the rate limit. It lacks explicit when-not-to-use or alternatives, but is sufficient given the unique purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations present, so description carries full burden. It discloses the tool reads stored memories, mentions session persistence, and implies no side effects. Sufficient for a simple read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler words. Purpose and usage are front-loaded. Every sentence provides necessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one optional parameter and no output schema, the description adequately covers purpose, usage, and context (session persistence). Could mention error handling but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers key parameter fully with description. The tool's description adds semantic value by explaining behavior when key is omitted (list all memories).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves a memory by key or lists all if key omitted. The verb 'retrieve' and resource 'memory' are specific. It distinguishes from sibling 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: 'Use this to retrieve context you saved earlier...' and notes the optional key behavior. It lacks explicit when-not-to-use but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
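A sketch of a typical monitoring call, assuming a connected MCP `client`; the argument values follow the parameter table above:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// "30d" is the relative shorthand the schema recommends for routine monitoring;
// an ISO date such as "2026-04-01" is also accepted per the description.
async function whatsNewAtApple(client: Client) {
  return client.callTool({
    name: "recent_changes",
    arguments: { type: "company", value: "AAPL", since: "30d" },
  });
}
```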
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. It discloses parallel fan-out to multiple sources and the return format (structured changes, count, URIs). It does not mention authentication, rate limits, or error behavior, but the core behavior is well explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise and packed with information in a single paragraph. It front-loads the core purpose. Could benefit from slight restructuring (e.g., separate lines for use cases), but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains the return (structured changes, count, URIs). It covers supported parameters and provides usage examples. Missing details on error handling or limitations, but sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds significant value: explains that 'type' is restricted to company, provides examples and recommended defaults for 'since', and clarifies that 'value' can be ticker or CIK. This goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool's function: retrieving recent changes for an entity since a given time. It specifies the entity type (company) and the data sources fanned out to (SEC EDGAR, GDELT, USPTO), making it distinct from siblings like entity_profile or compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides explicit use cases: 'brief me on what happened with X' or change-monitoring workflows. It also suggests default values for typical monitoring ('30d' or '1m'). However, it doesn't explicitly state when not to use this tool or provide alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
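A sketch of the remember / recall / forget lifecycle described across these three memory tools, assuming a connected MCP `client`; the key name is illustrative:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Save a value, read it back, then delete it when the task is done.
async function memoryLifecycle(client: Client) {
  await client.callTool({
    name: "remember",
    arguments: { key: "target_ticker", value: "AAPL" },
  });
  const saved = await client.callTool({ name: "recall", arguments: { key: "target_ticker" } });
  // Omit `key` to list every saved key instead:
  // await client.callTool({ name: "recall", arguments: {} });
  await client.callTool({ name: "forget", arguments: { key: "target_ticker" } });
  return saved;
}
```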
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses behavior tied to authentication ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which is beyond what annotations would provide. Could mention overwrite or size limits, but current info is helpful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Action stated first, followed by usage context and behavioral notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Low complexity tool with 2 simple parameters, no output schema. Description covers purpose, usage, and behavior adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds examples for 'key' and clarifies 'value' as general text. Schema already describes parameters, so description provides extra context without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action: 'Store a key-value pair in your session memory.' Differentiates from siblings like 'forget' and 'recall' by naming the specific resource and verb.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete use cases: 'save intermediate findings, user preferences, or context across tool calls.' Does not explicitly list alternatives, but the examples convey appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
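A sketch of the ordering the descriptions recommend (resolve a plain name to an official identifier before calling tools such as entity_profile), assuming a connected MCP `client`:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Resolve "Apple" to a ticker/CIK first, then request the full profile with it.
async function profileFromName(client: Client) {
  const ids = await client.callTool({
    name: "resolve_entity",
    arguments: { type: "company", value: "Apple" },
  });
  // Read the resolved ticker (e.g. "AAPL") or zero-padded CIK out of `ids`, then:
  const profile = await client.callTool({
    name: "entity_profile",
    arguments: { type: "company", value: "AAPL" },
  });
  return { ids, profile };
}
```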
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. It discloses return of IDs and URIs but lacks details on error handling, idempotency, rate limits, or data source specifics. Adequate for basic understanding but gaps in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded, each sentence adds distinct value: purpose, type specifics, return format, and benefit. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 params, no output schema, no annotations, description covers purpose, parameter usage, and return types. Mentions stable citation. Lacks error descriptions but reasonably complete for a simple resolution tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters described). Description adds value with concrete examples (ticker, CIK, name for company; brand/generic for drug), enhancing understanding beyond schema. No nested objects or output schema but sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool resolves entities to canonical IDs across Pipeworx data sources, specifies two entity types (company and drug) with examples, and notes it replaces multiple lookup calls. This is a specific verb+resource with clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for single-call resolution vs multiple lookups but does not explicitly state when not to use or compare to sibling tools like compare_entities or search tools. Sibling list shows no direct alternative for entity resolution, so guidance is adequate but not exhaustive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return format (verdict, structured form, actual value with citation, delta) and the efficiency claim (replaces 4–6 calls). However, it does not cover limitations, error handling, or auth needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the main purpose, and contains no fluff. It efficiently conveys purpose, scope, and a key benefit. Minor improvement could be structuring the output list for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single param, no output schema, no nested objects), the description covers input expectations, output items, supported domain, and value proposition. It is sufficient for an agent to understand usage, though maybe lacking error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'claim'. The description adds value by explaining the natural-language input with examples (e.g., 'Apple's FY2024 revenue was $400 billion'), going beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool validates claims against authoritative sources, specifies the domain (company-financial for US public companies), and lists return values. It differentiates by claiming to replace multiple agent calls, but does not explicitly name sibling tools for comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for financial claims and substitutes sequential calls, but it does not explicitly state when not to use it (e.g., for non-financial claims) or provide alternatives. Some guidance is present, but it lacks completeness.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.