diff-patch-tools
Server Details
Cloudflare Workers MCP server: diff-patch-tools
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: lazymac2x/diff-patch-tools-api
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 6 of 6 tools scored.
Each tool has a clearly defined purpose: apply_patch applies patches, diff_stats provides statistics, json_diff compares JSON, patch_validate checks applicability, text_diff generates diffs, and three_way_merge performs merges. No two tools overlap in functionality.
All tool names follow a consistent snake_case verb_noun pattern (e.g., apply_patch, text_diff). The naming is predictable and clearly indicates the operation and target.
With 6 tools, the server covers the essential operations for a diff/patch toolkit without being bloated. Each tool serves a necessary and distinct role, making the set well-scoped.
The toolkit includes core capabilities: generating diffs, applying patches, validating patches, computing statistics, comparing JSON, and three-way merging. A minor gap is the absence of a tool for reverse patching, but the set is largely complete for common use cases.
Available Tools
6 tools

apply_patch (B)
Apply a unified diff patch to an original text and return the patched result.
| Name | Required | Description | Default |
|---|---|---|---|
| patch | Yes | Unified diff patch string | |
| original | Yes | Original text to patch | |
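Since the listing publishes neither the endpoint nor an output schema, the following is a minimal sketch rather than a definitive recipe: it calls apply_patch over Streamable HTTP using the official TypeScript MCP SDK, with a placeholder URL and an assumed text-content result.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing above leaves the URL field blank.
const client = new Client({ name: "diff-patch-demo", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://diff-patch-tools.example.workers.dev/mcp")));

const original = "line one\nline two\nline three\n";
// A one-hunk unified diff that rewrites the middle line.
const patch = [
  "--- original",
  "+++ modified",
  "@@ -1,3 +1,3 @@",
  " line one",
  "-line two",
  "+line 2",
  " line three",
  "",
].join("\n");

const result = await client.callTool({ name: "apply_patch", arguments: { original, patch } });
// No output schema is published; MCP tools conventionally return content blocks.
console.log(result.content);
```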
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states only that the tool applies a patch and returns the result; it lacks details on error handling, malformed patches, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence with no redundant information; it efficiently conveys the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For simple patch application, the description is adequate, but it lacks behavioral details (e.g., what happens when the patch is invalid) and no output schema is provided. It could be more helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline score is 3. The description adds no meaning beyond what the schema already provides for the 'original' and 'patch' strings.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action (apply a unified diff patch) and the inputs and outputs (original text in, patched result out). This distinguishes it from sibling tools, which handle diff generation, validation, or merging.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like patch_validate or three_way_merge. No explicit when-to-use or when-not-to-use information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
diff_stats (A)
Parse a unified diff string and return line-level statistics: additions, deletions, hunks, files changed.
| Name | Required | Description | Default |
|---|---|---|---|
| diff | Yes | Unified diff string to analyse | |
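To illustrate the call shape, a hedged sketch under the same assumptions as the apply_patch example (official TypeScript MCP SDK, placeholder endpoint); the statistic names in the final comment come from the description above, not from a published output schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the apply_patch sketch.
const client = new Client({ name: "diff-patch-demo", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://diff-patch-tools.example.workers.dev/mcp")));

// A minimal one-file, one-hunk unified diff to analyse.
const diff = "--- a\n+++ b\n@@ -1,2 +1,2 @@\n line kept\n-line old\n+line new\n";

const stats = await client.callTool({ name: "diff_stats", arguments: { diff } });
// Expect counts of additions, deletions, hunks, and files changed,
// per the description; the exact result format is undocumented.
console.log(stats.content);
```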
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It mentions parsing and outputting statistics but does not disclose behavior like read-only nature, error handling, input validation, or any side effects. This leaves significant gaps for the agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence that efficiently conveys the tool's purpose and output without wasted words. Every word contributes meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explicitly lists the key output components (additions, deletions, hunks, files changed). It is mostly complete for a simple statistics tool, though it could mention the return format or edge cases like invalid input.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'diff' parameter described as 'Unified diff string to analyse'. The description repeats this same information, adding no extra semantic value beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool parses a unified diff and returns line-level statistics (additions, deletions, hunks, files changed). It uses a specific verb+resource and distinguishes from siblings like 'apply_patch' and 'json_diff' which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when diff statistics are needed but does not explicitly state when to use or when not to use this tool versus alternatives like 'patch_validate' or 'text_diff'. No usage context or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
json_diff (A)
Compute a structural diff between two JSON values, showing added, removed, and replaced fields.
| Name | Required | Description | Default |
|---|---|---|---|
| modified | Yes | Modified JSON value (object, array, or JSON string) | |
| original | Yes | Original JSON value (object, array, or JSON string) | |
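A sketch of a json_diff call under the same assumptions (SDK, placeholder endpoint); per the schema, both values may be passed as objects, arrays, or JSON strings, and the rendering of the diff is an open question since no output schema is published.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the apply_patch sketch.
const client = new Client({ name: "diff-patch-demo", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://diff-patch-tools.example.workers.dev/mcp")));

const result = await client.callTool({
  name: "json_diff",
  arguments: {
    original: { name: "demo", version: 1, flags: ["a"] },
    modified: { name: "demo", version: 2, flags: ["a", "b"], owner: "me" },
  },
});
// Expect an added field (owner), a replaced field (version), and an
// array change (flags); how these are rendered is undocumented.
console.log(result.content);
```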
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It only states the output (added, removed, replaced fields) but does not disclose behavior for edge cases like type mismatches, deeply nested values, or handling of JSON strings. No mention of performance or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of 16 words, front-loaded with the core action. Every word contributes value; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters and no output schema, the description is adequate but not complete. It lacks details on the returned diff format, error handling, and behavior for different JSON types (objects, arrays, strings). Additional context would improve agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond the schema's property descriptions ('Original JSON value', 'Modified JSON value'). It does not clarify format or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'compute' and the resource 'structural diff between two JSON values'. It specifies the output as 'added, removed, and replaced fields', which distinguishes it from sibling tools like text_diff (text diff) and diff_stats (statistics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for comparing JSON structures but provides no explicit guidance on when to use this tool versus alternatives like text_diff or diff_stats. It does not mention exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
patch_validate (A)
Check whether a unified diff patch can be applied cleanly to an original text without actually applying it.
| Name | Required | Description | Default |
|---|---|---|---|
| patch | Yes | Unified diff patch string | |
| original | Yes | Original text | |
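A sketch of the validate-before-apply flow the sibling tools suggest, under the same assumptions as the earlier examples; the return shape (boolean, status object, or text) is undocumented, so the sketch just prints whatever comes back.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the apply_patch sketch.
const client = new Client({ name: "diff-patch-demo", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://diff-patch-tools.example.workers.dev/mcp")));

const original = "alpha\nbeta\ngamma\n";
// The hunk's context matches the original, so this should validate cleanly.
const patch = "--- original\n+++ modified\n@@ -1,3 +1,3 @@\n alpha\n-beta\n+BETA\n gamma\n";

const check = await client.callTool({ name: "patch_validate", arguments: { original, patch } });
console.log(check.content); // shape undocumented: could be a boolean flag or a status report
```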
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must cover behavioral traits. It states the tool only checks without applying, which is transparent. However, it lacks details on what 'cleanly' means, error handling, or edge cases (e.g., whitespace handling). Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, front-loaded with the core purpose, and contains no redundant words. Every part is necessary and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two parameters and no output schema, the description is mostly complete. However, it does not mention the return value (likely a boolean or status), which would help an agent understand what to expect. This is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with 'original' and 'patch' fields well described. The description adds minimal extra meaning beyond the schema (e.g., specifying 'unified diff patch'). Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'check', the resource 'unified diff patch' and 'original text', and the specific action 'can be applied cleanly without actually applying it'. This distinguishes it from siblings like 'apply_patch' (which applies) and 'diff_stats' (which provides statistics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates when to use the tool: before applying a patch, to verify it will apply cleanly. However, it does not explicitly state when not to use it (e.g., when you actually want to apply the patch) or mention alternatives like 'apply_patch'. Still, the context is clear given the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
text_diff (A)
Generate a unified diff between two text strings. Ideal for comparing file versions before and after edits.
| Name | Required | Description | Default |
|---|---|---|---|
| context | No | Context lines around each change | 3 |
| modified | Yes | Modified text | |
| original | Yes | Original text | |
| modified_file | No | Label for the modified file | "modified" |
| original_file | No | Label for the original file | "original" |
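A sketch of text_diff exercising the optional parameters, under the same assumptions as the earlier examples; note that the file parameters are labels for the diff headers, not paths that get read.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the apply_patch sketch.
const client = new Client({ name: "diff-patch-demo", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://diff-patch-tools.example.workers.dev/mcp")));

const result = await client.callTool({
  name: "text_diff",
  arguments: {
    original: "alpha\nbeta\ngamma\n",
    modified: "alpha\nBETA\ngamma\n",
    original_file: "config.v1.txt", // header label only; no file is read
    modified_file: "config.v2.txt",
    context: 1, // tighter than the default of 3
  },
});
// Expect a unified diff whose ---/+++ headers carry the labels above.
console.log(result.content);
```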
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description is straightforward and implies a safe, read-only operation, but it does not disclose potential side effects, error cases, or output format details. It lacks behavioral depth beyond the core function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences with no filler. Every word adds value: the first sentence states the purpose, the second gives a use case.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks explicit mention of the return value (no output schema is provided), but the purpose and inputs are well covered. It is suitable for a simple diff tool, though the output format could be specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with every parameter described. The description adds no additional semantic meaning beyond what the schema already provides, so the baseline score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb ('Generate') and the resource ('unified diff between two text strings'). It is distinguished from siblings like 'apply_patch', 'diff_stats', and 'json_diff' by its focus on comparing text strings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage context ('Ideal for comparing file versions before and after edits') but lacks explicit when-not-to-use guidance or any mention of alternatives (e.g., diff_stats for statistics).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
three_way_merge (A)
Perform a three-way merge of base, ours, and theirs. Conflicts are marked with standard diff3 conflict markers.
| Name | Required | Description | Default |
|---|---|---|---|
| base | Yes | Common ancestor text | |
| ours | Yes | Our version of the text | |
| theirs | Yes | Their version of the text | |
| ours_label | No | Label for ours in conflict markers | "ours" |
| theirs_label | No | Label for theirs in conflict markers | "theirs" |
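Finally, a sketch of three_way_merge in which both sides edit the same line to force a conflict, under the same assumptions as the earlier examples; the marker layout in the comment follows the standard diff3 convention the description cites, not server-verified output.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the apply_patch sketch.
const client = new Client({ name: "diff-patch-demo", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://diff-patch-tools.example.workers.dev/mcp")));

const result = await client.callTool({
  name: "three_way_merge",
  arguments: {
    base: "title\nsetting = 1\n",
    ours: "title\nsetting = 2\n",
    theirs: "title\nsetting = 3\n",
    ours_label: "local", // appears in "<<<<<<< local"
    theirs_label: "upstream", // appears in ">>>>>>> upstream"
  },
});
// With diff3-style markers, the conflicted region should look roughly like:
//   <<<<<<< local
//   setting = 2
//   |||||||
//   setting = 1
//   =======
//   setting = 3
//   >>>>>>> upstream
console.log(result.content);
```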
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations were provided, so the description carries the full burden. It mentions conflict markers as a behavioral detail but does not disclose the return value format, error handling, or any side effects such as destructive behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences convey core purpose and a key behavioral aspect (conflict markers). No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no annotations or output schema, the description must be self-sufficient. It lacks an explanation of the return value (e.g., the full merged text), of error handling for unresolvable conflicts, and of any prerequisite context. Adequate for a simple tool, but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining that labels are for conflict markers and defaults are 'ours' and 'theirs', which goes beyond the schema's generic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Perform a three-way merge', a specific verb and resource. It distinguishes the tool from its sibling diff/patch tools by mentioning merging and diff3 conflict markers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for merging three text versions but does not explicitly state when to use this tool over alternatives like apply_patch or text_diff. No exclusions or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is reported as unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.