
Server Details

Cloudflare Workers MCP server: data-transform

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: lazymac2x/data-transform-api
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.4/5 across 10 of 10 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: conversion between formats, filtering, sorting, statistics, flattening, field picking, and validation. No two tools overlap in functionality.

Naming Consistency: 4/5

Most tool names follow a verb_noun or noun_to_noun pattern, but there is some variation: 'stats' is an abbreviation and 'flatten/unflatten' are single verbs. Overall, the naming is readable and mostly consistent.

Tool Count: 5/5

With 10 tools covering essential data transformations (CSV, JSON, XML, filtering, sorting, stats, etc.), the count is well-scoped and each tool earns its place.

Completeness: 4/5

The tool set covers basic CRUD-like operations on data and common conversions. Minor gaps exist, such as lack of aggregation or merging, but the surface is largely complete for typical data transform tasks.

Available Tools

10 tools
csv_to_json (B)

Parse CSV text into a JSON array of objects.

Parameters (JSON Schema)
- csv (required): CSV text to parse
- delimiter (optional): CSV delimiter (default: comma)
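A minimal Python sketch of the documented behavior may help; this is a hypothetical illustration, not the server's implementation, since type coercion and malformed-input handling are undisclosed:

```python
import csv
import io

def csv_to_json(csv_text: str, delimiter: str = ",") -> list[dict]:
    # Hypothetical sketch: the first row is treated as the header row;
    # values stay as strings (whether the server coerces numbers is unstated).
    reader = csv.DictReader(io.StringIO(csv_text), delimiter=delimiter)
    return [dict(row) for row in reader]

csv_to_json("name,age\nAda,36\nAlan,41")
# → [{"name": "Ada", "age": "36"}, {"name": "Alan", "age": "41"}]
```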
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry behavioral transparency. It only says 'parse CSV' without disclosing error handling, delimiter behavior, or how malformed input is treated. Minimal disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded, no unnecessary words. Effective for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool with no output schema; description gives enough: input CSV, output JSON array. Could mention edge cases but not necessary given low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds little beyond schema (e.g., 'CSV delimiter (default: comma)' is already in schema). Baseline 3 applies as extra detail is minimal.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool parses CSV text into a JSON array of objects, a specific verb and resource. It distinguishes from siblings like json_to_csv (reverse) and json_to_xml (different format).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as json_to_csv, filter, or sort. The description only states what it does, not the context or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

filter (A)

Filter a JSON array by a query object. Supports >, <, ! prefixes for comparisons.

Parameters (JSON Schema)
- data (required): Array of objects to filter
- query (required): Filter criteria (e.g. {"age":">30","status":"active"})
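The prefix syntax is worth a sketch. Assuming '>' and '<' compare numerically and '!' negates equality, which is one plausible reading since missing-key and error handling are not documented:

```python
def matches(item: dict, query: dict) -> bool:
    # Bare values test equality; ">"/"<" compare numerically; "!" negates.
    for key, cond in query.items():
        value = item.get(key)
        if isinstance(cond, str) and cond[:1] in (">", "<", "!"):
            op, rhs = cond[0], cond[1:]
            if op == "!":
                if value == rhs:
                    return False
            elif value is None:
                return False
            elif op == ">" and not float(value) > float(rhs):
                return False
            elif op == "<" and not float(value) < float(rhs):
                return False
        elif value != cond:
            return False
    return True

def filter_array(data: list[dict], query: dict) -> list[dict]:
    # Returns a new list; the input array is not modified.
    return [item for item in data if matches(item, query)]

people = [{"age": 25, "status": "active"}, {"age": 35, "status": "active"}]
filter_array(people, {"age": ">30", "status": "active"})
# → [{"age": 35, "status": "active"}]
```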
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It mentions prefix support (>, <, !) but omits details like mutation behavior, handling of missing keys, or error responses. Partial disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no redundancy. Every word adds value, and the key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is provided, so the description should clarify the return value (likely a filtered array) and mention error handling. It lacks this context, making it slightly incomplete for a tool used in data pipelines.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover both parameters (100% coverage). The description adds value by explaining the query object supports comparison prefixes, which is not in the schema, enhancing parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it filters a JSON array using a query object, and adds detail about comparison prefixes. This distinguishes it from sibling tools like 'sort' or 'flatten', though it could be clearer about returning a new array.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'pick' or 'validate'. The purpose is implicit from the name, but the description does not address exclusion scenarios or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flatten (B)

Flatten a nested JSON object into dot-notation keys.

Parameters (JSON Schema)
- data (required): Nested object to flatten
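Dot-notation flattening might behave as follows; a sketch only, since array handling and key-conflict resolution are unspecified:

```python
def flatten(data: dict, prefix: str = "") -> dict:
    # Recursively join nested keys with dots; non-dict leaves pass through.
    out = {}
    for key, value in data.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))
        else:
            out[path] = value
    return out

flatten({"user": {"name": "Ada", "address": {"city": "London"}}})
# → {"user.name": "Ada", "user.address.city": "London"}
```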
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description lacks details on handling of arrays, conflict resolution, depth limits, or output format, leaving behavioral uncertainty.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One concise sentence with front-loaded purpose, though it could include more useful information without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description does not specify return format or handle edge cases, which is adequate for a simple tool but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the single parameter has a description), but the description adds minimal value beyond the schema, mentioning only the dot-notation keys of the output, not parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action 'flatten' and the resource 'nested JSON object' into dot-notation keys, which distinguishes it from sibling tools like unflatten.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., when not to use, or comparison with unflatten or filter).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

json_to_csv (C)

Convert a JSON array of objects to CSV text. Optionally specify a delimiter.

Parameters (JSON Schema)
- data (required): Array of objects to convert
- delimiter (optional): CSV delimiter (default: comma)
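How headers, quoting, and missing keys are handled is undisclosed; one plausible sketch, assuming the first object's keys become the header row:

```python
import csv
import io

def json_to_csv(data: list[dict], delimiter: str = ",") -> str:
    # Assumption: the header comes from the first object's keys;
    # Python's csv module supplies standard quoting of special characters.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(data[0].keys()),
                            delimiter=delimiter)
    writer.writeheader()
    writer.writerows(data)
    return buf.getvalue()

json_to_csv([{"name": "Ada", "age": 36}]).splitlines()
# → ["name,age", "Ada,36"]
```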
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavior like handling of missing keys, nested objects, or quoting rules, but it only states the basic conversion and optional delimiter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, consisting of two short sentences with no extraneous information. It is well-structured but may be too brief.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description omits important details such as how output CSV handles headers, quoting, or special characters. Given the lack of output schema and annotations, this is insufficient for complex inputs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, but the description adds no extra meaning beyond the schema—it merely restates the optional delimiter. No details on default delimiter or validation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts a JSON array of objects to CSV text, with an optional delimiter. This distinguishes it from sibling tools like csv_to_json (reverse) and json_to_xml (different format).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like csv_to_json or other converters, nor any prerequisites or context for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

json_to_xml (B)

Convert a JSON object or array to XML string.

Parameters (JSON Schema)
- data (required): JSON data to convert to XML
- rootName (optional): Root element name (default: root)
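Since the handling of nested objects, arrays, and escaping is undisclosed, here is one hypothetical mapping, with dict keys as child elements and list items repeating the parent tag:

```python
def json_to_xml(data, root_name: str = "root") -> str:
    # Sketch: dict keys become child elements, list items repeat the
    # parent tag, scalars become text. No escaping or attributes here.
    if isinstance(data, list):
        return "".join(json_to_xml(item, root_name) for item in data)
    if isinstance(data, dict):
        inner = "".join(json_to_xml(v, k) for k, v in data.items())
    else:
        inner = str(data)
    return f"<{root_name}>{inner}</{root_name}>"

json_to_xml({"user": {"name": "Ada"}})
# → "<root><user><name>Ada</name></user></root>"
```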
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior. It only states the conversion without details on handling nested objects, arrays, attributes, error cases, or output format specifics. This is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is concise and front-loaded. However, it could be slightly more structured (e.g., listing parameters) to improve scannability. Still efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion tool, the description covers the core function. However, missing details like return format (string), handling of arrays, and default behavior for rootName reduce completeness. Without output schema, more explanation is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The tool description adds no new meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert' and the resource 'JSON object or array to XML string'. It precisely describes the tool's function, distinguishing it from siblings like csv_to_json or json_to_csv.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use or avoid this tool. No mention of scenarios, limitations, or comparisons with siblings. The description lacks any usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pick (A)

Pick specific fields from each object in a JSON array.

Parameters (JSON Schema)
- data (required): Array of objects
- fields (required): Field names to keep
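The field-picking operation can be sketched as below; whether missing fields are skipped or raise an error is undisclosed, and this sketch assumes they are skipped:

```python
def pick(data: list[dict], fields: list[str]) -> list[dict]:
    # Keep only the listed fields from each object; fields absent
    # from an object are silently skipped (an assumption).
    return [{f: obj[f] for f in fields if f in obj} for obj in data]

pick([{"id": 1, "name": "Ada", "age": 36}], ["id", "name"])
# → [{"id": 1, "name": "Ada"}]
```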
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure. It states the core action (picking fields) but does not discuss edge cases like missing fields, empty arrays, or field ordering. This is adequate for a simple transformation but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (7 words) and front-loaded with the verb 'Pick'. Every word is necessary, with no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and lack of output schema, the description is minimally adequate. It explains the core functionality but omits details about behavior on missing fields or empty arrays, which would be helpful for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides full coverage (100%) for both parameters ('data' and 'fields'). The description adds no additional meaning beyond what the schema states, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb-resource pair ('Pick specific fields from each object in a JSON array') that clearly indicates the tool's function and distinguishes it from siblings like 'filter' (which filters objects) and 'flatten' (which restructures nested data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'filter', 'flatten', or 'sort'. There is no mention of context, prerequisites, or exclusion criteria, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sort (B)

Sort a JSON array by a field.

Parameters (JSON Schema)
- data (required): Array of objects to sort
- field (required): Field name to sort by
- order (optional): Sort order (default: asc)
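A sketch of the presumed contract, with the caveat that stability, mutation, and missing-field behavior are all undisclosed:

```python
def sort_by(data: list[dict], field: str, order: str = "asc") -> list[dict]:
    # Returns a new list (sorted() is stable and leaves the input intact);
    # objects missing the field would raise a KeyError in this sketch.
    return sorted(data, key=lambda obj: obj[field], reverse=(order == "desc"))

sort_by([{"n": 3}, {"n": 1}, {"n": 2}], "n")
# → [{"n": 1}, {"n": 2}, {"n": 3}]
```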
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations present. Description does not disclose sorting stability, handling of missing fields, mutation behavior, or default order.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. However, it lacks structured details like examples or defaults.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description doesn't mention return format, error behavior, or edge cases. Incomplete for a sorting tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% so baseline is 3. Description adds no extra meaning beyond schema (e.g., default order 'asc' is not mentioned).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'sort', the resource 'JSON array', and the operation 'by a field'. It distinguishes from sibling tools like filter or pick.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives (e.g., filter, pick). No exclusions or contexts provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stats (A)

Compute statistics (count, sum, mean, min, max, median) for a numeric field.

Parameters (JSON Schema)
- data (required): Array of objects
- field (required): Numeric field to compute stats for
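The six listed statistics can be sketched directly; handling of non-numeric values and empty input is undisclosed, so this hypothetical version skips non-numbers and would raise on an empty result:

```python
import statistics

def stats(data: list[dict], field: str) -> dict:
    # Collect the numeric values for the field (assumption: non-numeric
    # and missing values are skipped), then compute the six statistics.
    values = [v for obj in data
              if isinstance(v := obj.get(field), (int, float))]
    return {
        "count": len(values),
        "sum": sum(values),
        "mean": statistics.mean(values),
        "min": min(values),
        "max": max(values),
        "median": statistics.median(values),
    }

stats([{"x": 1}, {"x": 2}, {"x": 6}], "x")
# e.g. count 3, sum 9, mean 3, min 1, max 6, median 2
```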
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and description fails to disclose handling of non-numeric fields, empty arrays, or error conditions, leaving significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with purpose, no extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers purpose and basic behavior, but lacks details on edge cases, output format, or assumptions about input data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and description adds value by listing the specific statistics computed (count, sum, mean, etc.), exceeding schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool computes statistics (count, sum, mean, min, max, median) for a numeric field, effectively distinguishing it from sibling tools like filter or sort.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or alternatives; usage is implied for numeric analysis, but no context on when not to use or which sibling might be better.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unflatten (A)

Unflatten a dot-notation JSON object back into nested structure.

Parameters (JSON Schema)
- data (required): Flat dot-notation object to unflatten
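The inverse of flatten might look like this sketch; key collisions (e.g. "a" alongside "a.b") are undisclosed behavior, and here they would simply fail:

```python
def unflatten(data: dict) -> dict:
    # Split each dotted key and rebuild the nested dictionaries.
    out = {}
    for dotted, value in data.items():
        parts = dotted.split(".")
        node = out
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return out

unflatten({"user.name": "Ada", "user.address.city": "London"})
# → {"user": {"name": "Ada", "address": {"city": "London"}}}
```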
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It only states the basic operation without disclosing handling of edge cases like key collisions, array support, or type preservation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence that is efficient and contains no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is adequate. It explains the core function, though additional details on behavior could enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter description. The description reinforces the purpose but does not add extra meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool 'unflatten a dot-notation JSON object back into nested structure', providing a specific verb and resource. The distinction from the sibling 'flatten' is implied by the opposite operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage as the inverse of flattening but does not explicitly specify when to use this tool versus alternatives like 'flatten' or other transformation tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate (B)

Validate a JSON string and return type, size, or error.

Parameters (JSON Schema)
- data (required): JSON string to validate
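Since the exact meanings of 'type' and 'size' are unstated, here is one plausible sketch of the contract, with both names interpreted as the parsed JSON type and element count:

```python
import json

def validate(data: str) -> dict:
    # Sketch of one plausible contract: on success report the parsed
    # JSON type and a size (element count for containers, 1 otherwise);
    # on failure return the parser error. The real return shape is unstated.
    try:
        parsed = json.loads(data)
    except json.JSONDecodeError as exc:
        return {"valid": False, "error": str(exc)}
    size = len(parsed) if isinstance(parsed, (dict, list)) else 1
    return {"valid": True, "type": type(parsed).__name__, "size": size}

validate('{"a": 1, "b": 2}')
# → {"valid": True, "type": "dict", "size": 2}
```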
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must disclose behavioral traits. It mentions returning type, size, or error but lacks details on validation criteria, performance, or whether the tool is fully read-only (likely but not stated).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no redundancy, efficiently conveying the core purpose and output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (1 parameter, no output schema), the description adequately explains what the tool does and what it returns. It could be more precise about 'type' and 'size' but is sufficient for basic understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers the single parameter completely (100% coverage). The description adds context about the return values but does not improve parameter understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool validates a JSON string and returns type, size, or error. It distinguishes from sibling tools like csv_to_json or filter, which perform different operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for validating JSON strings but provides no explicit guidance on when to use vs alternatives, nor any preconditions or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
