Server Details
An MCP server for converting data between 16 formats including JSON, YAML, TOML, CSV, XML, Markdown, HTML, Excel, PDF, and more. Supports single transforms, batch fan-out to multiple targets in one call, and JSON reshaping with dot-notation path mapping. 240 conversion pairs, zero config.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Available Tools (5)

batch_transform (A, Read-only, Idempotent)
Convert multiple data items in one call. Each item specifies sourceFormat, targetFormat, and data. Returns per-item results with success/failure. Maximum 50 items per batch.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | | |
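To make the batch shape concrete, here is a hypothetical `batch_transform` payload. The item field names (`sourceFormat`, `targetFormat`, `data`) come from the tool description, but the exact wire format of a call is an assumption:

```python
import json

MAX_ITEMS = 50  # documented per-batch limit

# Each item is an independent conversion request.
items = [
    {"sourceFormat": "json", "targetFormat": "yaml", "data": '{"a": 1}'},
    {"sourceFormat": "csv", "targetFormat": "json", "data": "x,y\n1,2"},
]
assert len(items) <= MAX_ITEMS, "batch_transform accepts at most 50 items"

payload = {"items": items}
print(json.dumps(payload, indent=2))
```

Because results are returned per item, one malformed item should not fail the whole batch.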
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral details absent from annotations: per-item success/failure results and the 50-item batch limit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four dense sentences with no filler; front-loaded purpose followed by constraints, structure, and return behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers batch-specific concerns (size limits, partial failure handling) despite missing output schema, though error handling specifics could be clearer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Compensates effectively for 0% schema coverage by specifying that items contain sourceFormat/targetFormat/data objects, though the schema types this parameter as a string.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States clear action (convert multiple data items) and distinguishes from single-item 'transform' sibling via 'batch' and 'one call' phrasing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete constraint (50 item max) but lacks explicit guidance on when to prefer this over single-item 'transform' alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
detect_format (B, Read-only, Idempotent)
Auto-detect the format of the provided data. Returns the detected format name (e.g. 'json', 'csv', 'xml') or 'unknown'. Useful when the source format is not known ahead of time.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | | |
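The description names the possible return values but not how detection works. The sketch below is a rough local approximation of that behavior, not the server's actual heuristics, which are undocumented:

```python
import json

def guess_format(data: str) -> str:
    """Rough local approximation of detect_format's return values."""
    try:
        json.loads(data)
        return "json"
    except ValueError:
        pass
    if data.lstrip().startswith("<"):
        return "xml"
    first_line = data.splitlines()[0] if data.splitlines() else ""
    if "," in first_line:
        return "csv"
    return "unknown"

print(guess_format('{"a": 1}'))   # json
print(guess_format("<root/>"))    # xml
print(guess_format("free text"))  # unknown
```

The real tool presumably covers all 16 formats; the point is that it returns a single format name string, with `'unknown'` as the fallback.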
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds return value specifics ('json', 'csv', 'unknown') beyond annotations but omits other behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Compact, well-ordered sentences with no fluff; front-loaded with action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool; compensates for missing output schema by describing return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Fails to describe the 'data' parameter semantics despite 0% schema description coverage; unclear if it expects raw content, file path, etc.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the detection purpose with specific examples and implicitly distinguishes from transformation siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides positive usage condition ('when source format not known') but lacks negative conditions or alternative comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_capabilities (A, Read-only, Idempotent)
List all supported data formats and the total number of conversion pairs available.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes what content is returned (formats and count), adding to annotations, but omits other behavioral details like caching or response structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with verb, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Since no output schema exists, description adequately explains return values (formats and pairs), though could specify data structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline score of 4 applies; no parameter semantics are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb and objects (lists formats and conversion pairs), implicitly distinguishes from action-oriented siblings (transform, detect, reshape).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, or when not to use it; lacks explicit discovery context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reshape_json (B, Read-only, Idempotent)
Restructure JSON using dot-notation path mapping. Example: mapping {"name": "user.firstName"} extracts nested fields.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | | |
| mapping | Yes | | |
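The dot-notation semantics from the description can be sketched locally. This is an illustration of the mapping mechanism, not the server's implementation; in particular, how missing paths are handled is undocumented (here they yield `None`):

```python
def reshape(data: dict, mapping: dict) -> dict:
    """Sketch of reshape_json: each mapping value is a dot-notation
    path into `data`; each mapping key names an output field."""
    def get_path(obj, path):
        for key in path.split("."):
            if not isinstance(obj, dict):
                return None  # assumed behavior for missing paths
            obj = obj.get(key)
        return obj
    return {out: get_path(data, path) for out, path in mapping.items()}

source = {"user": {"firstName": "Ada", "lastName": "Lovelace"}}
print(reshape(source, {"name": "user.firstName"}))  # {'name': 'Ada'}
```

This matches the description's example: `{"name": "user.firstName"}` extracts the nested field into a flat `name` key.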
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Aligns with annotations (read-only extraction) but omits behavioral details like handling of missing paths or validation errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise and front-loaded; every word serves a purpose, though brevity leaves critical parameter documentation gaps.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers core functionality but incomplete due to missing 'data' parameter specification and absence of output/error behavior description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The example clarifies the 'mapping' structure significantly, but 'data' parameter format (JSON string vs object) is unexplained despite 0% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (restructure) and mechanism (dot-notation path mapping) with concrete example, distinguishing it from generic transform siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage example but lacks guidance on when to choose this tool over siblings like 'transform' or 'batch_transform'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transform (A, Read-only, Idempotent)
Convert data between formats. Pass the source format, target format, and raw data. Any format can be converted to any other. Supported formats: json, csv, tsv, xml, yaml, toml, html (table), markdown (table), properties, plain_text, base64, url_encoded, hex, pdf, excel, docx.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | | |
| sourceFormat | Yes | | |
| targetFormat | Yes | | |
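As an illustration of one supported pair, here is a stdlib sketch of what a call with `sourceFormat="json"` and `targetFormat="csv"` might produce. The server's actual output formatting (delimiters, quoting, nested-object handling) is unknown; this assumes the input is a JSON array of flat objects:

```python
import csv
import io
import json

def json_to_csv(data: str) -> str:
    """Local approximation of a json -> csv transform."""
    rows = json.loads(data)  # expects a list of flat objects
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(json_to_csv('[{"x": 1, "y": 2}]'))
```

For multiple conversions in one round trip, batch_transform is the sibling to reach for; transform handles exactly one source/target pair per call.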
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable capability context ('Any format can be converted to any other' and exhaustive supported format list) beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and front-loaded; format enumeration is lengthy but necessary for completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic invocation but lacks error handling details or output format expectations given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Compensates well for 0% schema coverage by explicitly mapping all three parameters ('source format, target format, and raw data') to their semantic meanings.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Convert') and resource ('data between formats') with comprehensive format list, but does not explicitly differentiate from batch_transform sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus batch_transform, detect_format, or reshape_json alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}
```

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
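A minimal local sanity check of a glama.json document against the structure shown above can be done before publishing. Any validation beyond "maintainer entries carry an email" is an assumption here, not Glama's documented rules:

```python
import json

doc = json.loads("""{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{"email": "your-email@example.com"}]
}""")

# Assumed checks: schema URL points at glama.ai, every maintainer has an email.
assert doc["$schema"].startswith("https://glama.ai/")
assert all("email" in m for m in doc["maintainers"])
print("glama.json structure looks valid")
```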
Sign in to verify ownership to:

- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.