
Server Details

Disify MCP — wraps Disify API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-disify
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

(Diagram: MCP client → Glama MCP Gateway → MCP server)

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
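
Concretely, connecting over the transport listed above (Streamable HTTP) might look like the sketch below, using the official MCP TypeScript SDK. The endpoint URL and client name are placeholders, since this page does not show the server's URL:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the URL Glama issues for your connector.
const url = new URL("https://example.invalid/mcp");

export const client = new Client({ name: "example-agent", version: "0.1.0" });

// Streamable HTTP is the transport listed for this server.
await client.connect(new StreamableHTTPClientTransport(url));

// List the tools the gateway exposes for this connector
// (subject to your per-tool access settings).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

The per-tool snippets below reuse this `client`.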
Tool Descriptions (Grade A)

Average 3.9/5 across 7 of 7 tools scored. Lowest: 3.1/5.

Server Coherence (Grade A)
Disambiguation: 4/5

Most tools have distinct purposes, such as ask_pipeworx for general queries, check_domain/validate_email for email/domain validation, and remember/recall/forget for memory management. However, discover_tools overlaps somewhat with ask_pipeworx in helping users find tools, which could cause minor confusion in selection.

Naming Consistency: 4/5

The naming follows a consistent verb_noun pattern (e.g., ask_pipeworx, check_domain, validate_email) with clear action-oriented verbs. The closest thing to an outlier is 'discover_tools', which keeps the verb_noun format but is more exploratory in intent than the rest.

Tool Count: 5/5

With 7 tools, the count is well-scoped for a utility server covering querying, validation, tool discovery, and memory management. Each tool serves a clear function without being overwhelming or insufficient for the intended scope.

Completeness: 4/5

The tool set covers key areas like data querying, email/domain validation, tool discovery, and session memory management, providing a coherent utility surface. A minor gap is the lack of tools for more advanced data processing or integration beyond the basic query and validation functions, but core workflows are adequately supported.

Available Tools

7 tools
ask_pipeworx (Grade A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
  question (required): Your question or request in natural language.

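As a sketch of a call, reusing the `client` from the gateway example above; the question is one of the description's own examples:

```typescript
const answer = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "What is the US trade deficit with China?" },
});
// No output schema is published, so the shape of the result is unspecified.
console.log(answer.content);
```
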
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool acts as an intelligent intermediary ('Pipeworx picks the right tool, fills the arguments'), handles natural language input, and returns results. However, it lacks details on limitations like rate limits, error handling, or data source constraints, which would be helpful for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by elaboration on the mechanism and usage guidelines. Each sentence earns its place by adding clarity or examples without redundancy. The structure efficiently conveys essential information in a compact form.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language processing with tool selection), no annotations, and no output schema, the description is reasonably complete. It explains the tool's behavior, usage, and parameter semantics effectively. However, it could improve by mentioning output format or potential limitations, as the absence of an output schema leaves return values unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the parameter's purpose beyond the schema's 'Your question or request in natural language': it emphasizes 'plain English' queries and provides concrete examples, enhancing understanding of what constitutes a valid question. This compensates well, though it doesn't detail format constraints beyond natural language.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'), distinguishing it from sibling tools like check_domain or validate_email that perform specific validations rather than natural language queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidelines: 'No need to browse tools or learn schemas — just describe what you need.' It includes examples ('What is the US trade deficit with China?', 'Look up adverse events for ozempic', 'Get Apple's latest 10-K filing') that illustrate when to use this tool versus alternatives, emphasizing its role for natural language queries rather than structured tool invocations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_domain (Grade B)

Check if a domain is associated with disposable or temporary email services. Returns risk assessment and classification.

Parameters (JSON Schema):
  domain (required): The domain name to check, e.g. "mailinator.com".

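A hedged call sketch, reusing the `client` from the gateway example; the domain is the example value from the parameter docs:

```typescript
const verdict = await client.callTool({
  name: "check_domain",
  arguments: { domain: "mailinator.com" }, // example from the schema
});
// Per the description: risk assessment and classification.
console.log(verdict.content);
```
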
Behavior: 2/5

With no annotations provided, the description carries full burden for behavioral disclosure. It states what the tool does but doesn't describe how it works—such as whether it queries an external API, uses a local database, has rate limits, or returns specific result formats. This leaves significant behavioral gaps for the agent.

Conciseness: 5/5

The description is a single, clear sentence that efficiently conveys the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy for an agent to parse and understand quickly.

Completeness: 3/5

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate. It explains what the tool does but lacks details on behavioral traits, usage context, and output format, which could hinder an agent's ability to use it effectively in varied scenarios.

Parameters: 3/5

The input schema has 100% description coverage, with the 'domain' parameter well-documented in the schema itself. The description adds no additional parameter semantics beyond implying the domain is checked for disposable/temporary email associations, which aligns with the schema's purpose but doesn't provide extra details like format constraints or examples beyond what's in the schema.

Purpose: 4/5

The description clearly states the tool's purpose: checking if a domain is associated with disposable/temporary email services. It specifies the verb 'check' and resource 'domain', but doesn't explicitly differentiate from the sibling 'validate_email' tool, which likely validates email addresses rather than domain types.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus the sibling 'validate_email' tool. The description implies usage for domain checking, but offers no explicit context, exclusions, or alternatives, leaving the agent to infer when this specific check is appropriate.

discover_tools (Grade A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
  limit (optional): Maximum number of tools to return (default 20, max 50).
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries").

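A call sketch reusing the earlier `client`; the query is taken from the schema's own examples, and `limit` shows the optional cap:

```typescript
const found = await client.callTool({
  name: "discover_tools",
  arguments: {
    query: "analyze housing market trends", // example from the schema
    limit: 10, // optional; default 20, max 50
  },
});
// Per the description: the most relevant tool names and descriptions.
console.log(found.content);
```
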
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it's a search operation (implied read-only), returns 'the most relevant tools with names and descriptions', and suggests it's for initial discovery. However, it doesn't mention potential limitations like rate limits, authentication needs, or error handling. For a tool with no annotations, this is good but not comprehensive.

Conciseness: 5/5

The description is two sentences, front-loaded with the core purpose, followed by usage guidance. Every sentence earns its place: the first explains what the tool does, and the second tells when to use it. There is zero waste or redundancy, making it highly efficient and well-structured.

Completeness: 4/5

Given the tool's complexity (search with natural language query), no annotations, no output schema, and 100% schema coverage, the description is mostly complete. It covers purpose and usage well but lacks details on output format (beyond 'names and descriptions') and behavioral constraints. For a discovery tool, this is sufficient but could be enhanced with more context on results.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents both parameters (query and limit) thoroughly. The description adds no additional parameter semantics beyond what's in the schema. According to the rules, with high schema coverage (>80%), the baseline is 3 even with no param info in the description, which applies here.

Purpose: 5/5

The description clearly states the specific action ('Search the Pipeworx tool catalog'), the resource ('tool catalog'), and the method ('by describing what you need'). It distinguishes this tool from its siblings (check_domain, validate_email) by focusing on tool discovery rather than domain or email validation. The purpose is explicit and well-defined.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context (large tool catalog) and a specific recommendation (use first), which helps the agent distinguish it from alternatives. No exclusions are mentioned, but the guidance is strong and actionable.

forget (Grade B)

Delete a stored memory by key.

Parameters (JSON Schema):
  key (required): Memory key to delete.

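A call sketch reusing the earlier `client`; the key is borrowed from remember's schema examples and is hypothetical here:

```typescript
// The description does not say whether deletion is reversible or what
// happens if the key does not exist.
await client.callTool({
  name: "forget",
  arguments: { key: "subject_property" }, // hypothetical key
});
```
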
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'Delete', implying a destructive mutation, but lacks details on permissions needed, whether deletion is permanent or reversible, error handling (e.g., if the key doesn't exist), or side effects. This is a significant gap for a mutation tool.

Conciseness: 5/5

The description is a single, efficient sentence with no wasted words. It is front-loaded with the core action ('Delete'), making it easy to scan and understand quickly.

Completeness: 2/5

Given the tool's destructive nature, no annotations, and no output schema, the description is incomplete. It fails to address critical context such as what 'delete' entails (permanent removal?), authentication requirements, or what happens on success/failure. For a mutation tool with minimal structured data, this leaves the agent under-informed.

Parameters: 4/5

The input schema has 100% description coverage, so the schema fully documents the single parameter 'key'. The description adds no additional parameter semantics beyond implying deletion targets a memory by its key, which aligns with the schema. With the lone parameter needing no extra explanation, a baseline of 4 is appropriate.

Purpose: 5/5

The description clearly states the specific action ('Delete') and resource ('a stored memory by key'), distinguishing it from sibling tools like 'recall' (likely retrieves) and 'remember' (likely creates). It precisely communicates the tool's function without being vague or tautological.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., that a memory must exist to delete it), exclusions, or comparisons to siblings like 'recall' or 'remember', leaving the agent to infer usage context.

recall (Grade A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
  key (optional): Memory key to retrieve (omit to list all keys).

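A sketch of both modes, reusing the earlier `client` (the key is hypothetical):

```typescript
// With a key: fetch one stored memory.
const one = await client.callTool({
  name: "recall",
  arguments: { key: "subject_property" }, // hypothetical key
});

// Without a key: list all stored keys, per the parameter docs.
const all = await client.callTool({ name: "recall", arguments: {} });
console.log(one.content, all.content);
```
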
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: retrieving memories by key, listing all memories when key is omitted, and accessing memories from current or previous sessions. It doesn't mention error handling, performance characteristics, or data persistence details, but covers the core operational behavior adequately.

Conciseness: 5/5

The description is perfectly concise with two sentences that each earn their place. The first sentence states the core functionality with parameter guidance, and the second provides usage context. There's zero waste or redundancy, and information is front-loaded appropriately.

Completeness: 4/5

For a simple retrieval tool with 1 parameter and 100% schema coverage but no output schema, the description provides good contextual completeness. It explains what the tool does, when to use it, and how parameters affect behavior. The main gap is lack of information about return format or error conditions, which would be helpful given no output schema exists.

Parameters: 4/5

The schema has 100% description coverage, so the baseline is 3. The description adds meaningful context by explaining the semantic effect of omitting the key parameter ('omit to list all keys') and connecting parameter usage to the tool's purpose ('retrieve a previously stored memory by key'). This provides valuable guidance beyond the schema's technical specification.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations. The description explicitly mentions retrieving context saved earlier in sessions.

Usage Guidelines: 5/5

The description provides explicit usage guidance: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter to list all memories versus when to include it for specific retrieval. This gives clear context for when to use this tool versus alternatives.

remember (Grade A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference").
  value (required): Value to store (any text — findings, addresses, preferences, notes).

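A call sketch reusing the earlier `client`; the key comes from the schema's examples, the value is made up for illustration:

```typescript
await client.callTool({
  name: "remember",
  arguments: {
    key: "target_ticker", // example key from the schema
    value: "AAPL",        // made-up value
  },
});
```
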
Behavior: 4/5

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it explains persistence differences (authenticated vs. anonymous sessions with 24-hour limit) and the cross-tool call context. However, it doesn't mention potential limitations like storage size, rate limits, or error conditions.

Conciseness: 5/5

The description is perfectly front-loaded with the core purpose in the first sentence, followed by usage context and behavioral details. Every sentence earns its place with no wasted words, making it highly efficient and scannable.

Completeness: 4/5

For a 2-parameter tool with no annotations and no output schema, the description provides good context about persistence behavior and usage scenarios. However, it doesn't explain what happens on successful storage (confirmation? stored value?) or potential error cases, leaving some gaps in completeness.

Parameters: 3/5

With 100% schema description coverage, the schema already fully documents both parameters. The description doesn't add any additional parameter semantics beyond what's in the schema properties, so it meets the baseline for high schema coverage without providing extra value.

Purpose: 5/5

The description clearly states the specific verb ('store') and resource ('key-value pair in your session memory'), distinguishing it from siblings like 'recall' (retrieve) and 'forget' (remove). It explicitly identifies the tool's function as persistent storage with session context.

Usage Guidelines: 4/5

The description provides clear context about when to use it ('save intermediate findings, user preferences, or context across tool calls'), but doesn't explicitly mention when NOT to use it or name alternatives like 'forget' for removal or 'recall' for retrieval. The guidance is helpful but lacks explicit exclusions.

validate_email (Grade B)

Verify an email address is properly formatted, has valid DNS records, and isn't disposable or an alias. Returns validation status and risk flags.

Parameters (JSON Schema):
  email (required): The email address to validate.

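A call sketch reusing the earlier `client`; the address is a made-up example:

```typescript
const result = await client.callTool({
  name: "validate_email",
  arguments: { email: "someone@mailinator.com" }, // made-up address
});
// Per the description: validation status and risk flags.
console.log(result.content);
```
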
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It lists the types of checks performed (formatting, DNS, disposable, alias), which is helpful, but lacks critical details: it doesn't specify whether this is a read-only operation, what the output format looks like, error handling, rate limits, or authentication requirements. For a validation tool with zero annotation coverage, this is a significant gap.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core purpose ('Verify an email address') and enumerates the specific checks without redundancy. Every word earns its place, making it highly concise and well-structured.

Completeness: 3/5

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate. It covers what the tool does but lacks output details, error handling, and differentiation from siblings. With no output schema, the description should ideally hint at return values, but it doesn't, leaving gaps in completeness.

Parameters: 3/5

The input schema has 100% description coverage, with the 'email' parameter clearly documented. The description adds no additional parameter semantics beyond what the schema provides (e.g., it doesn't clarify email format expectations or validation rules). Baseline 3 is appropriate when the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool's purpose: verifying email addresses with specific checks (formatting, DNS, disposable status, alias detection). It leads with a specific verb ('verify') and identifies the resource ('email address'). However, it doesn't explicitly differentiate from the sibling tool 'check_domain', which might have overlapping functionality.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'check_domain'. It mentions what the tool does but offers no context about appropriate use cases, prerequisites, or exclusions. This leaves the agent without direction on tool selection.

