rawr-ai / Filesystem MCP Server

xml_to_json_string

Convert XML files to JSON strings directly for quick content inspection without file creation. Specify input path and max bytes to read. Supports optional XML parsing settings like preserving property order.

Instructions

Convert an XML file to a JSON string and return it directly. This is useful for quickly inspecting XML content as JSON without creating a new file. Requires maxBytes parameter (default 10KB). Uses fast-xml-parser for conversion. The input path must be within allowed directories. This tool is fully functional in both readonly and write modes (respecting maxBytes) since it only reads the XML file and returns the parsed data.

Input Schema

Name      Required  Description                                                           Default
maxBytes  Yes       Maximum bytes to read from the XML file. Must be a positive integer.  10KB
options   No        Optional parser settings (ignoreAttributes, preserveOrder).           {}
xmlPath   Yes       Path to the XML file to convert.                                      (none)
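As an illustration of the shape this schema expects, a hypothetical argument object might look like the following (the path and values are invented for the example, not taken from a real deployment):

```typescript
// Hypothetical arguments for xml_to_json_string; every value here is illustrative.
const exampleArgs = {
  xmlPath: "/workspace/data/config.xml", // must resolve inside an allowed directory
  maxBytes: 10 * 1024,                   // matches the handler's 10KB default
  options: {
    ignoreAttributes: false, // keep XML attributes in the JSON output
    preserveOrder: true,     // preserve document order of elements
  },
};

console.log(JSON.stringify(exampleArgs));
```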

Implementation Reference

  • The core handler function implementing the xml_to_json_string tool: validates input, reads XML file, parses it to JSON object using fast-xml-parser, stringifies with formatting, applies optional size truncation for large outputs, and returns as text content.
    import { promises as fs } from "node:fs";
    import { XMLParser } from "fast-xml-parser";
    // parseArgs (schema validation) and validatePath (allowed-directory
    // checking) are local helpers from elsewhere in this project.

    export async function handleXmlToJsonString(
      args: unknown,
      allowedDirectories: string[],
      symlinksMap: Map<string, string>,
      noFollowSymlinks: boolean
    ) {
      const parsed = parseArgs(XmlToJsonStringArgsSchema, args, 'xml_to_json_string');
    
      const { xmlPath, maxBytes, options } = parsed;
      const validXmlPath = await validatePath(xmlPath, allowedDirectories, symlinksMap, noFollowSymlinks);
      
      try {
        // Read the XML file (no input size gating; limit only output)
        const xmlContent = await fs.readFile(validXmlPath, "utf-8");
        
        // Parse XML to JSON
        const parserOptions = {
          ignoreAttributes: options?.ignoreAttributes ?? false,
          preserveOrder: options?.preserveOrder ?? true,
          // Add other options as needed
        };
        
        const parser = new XMLParser(parserOptions);
        const jsonObj = parser.parse(xmlContent);
        
        // Return the JSON as a string
        let jsonContent = JSON.stringify(jsonObj, null, 2);
    
        // Apply response-size cap
        const responseLimit = (parsed as any).maxResponseBytes ?? maxBytes ?? 200 * 1024; // default 200KB
        if (typeof responseLimit === 'number' && responseLimit > 0) {
          const size = Buffer.byteLength(jsonContent, 'utf8');
          if (size > responseLimit) {
            const summary = {
              _meta: {
                truncated: true,
                originalSize: size,
                note: `JSON too large; summarizing to fit ${responseLimit} bytes.`
              },
              sample: Array.isArray(jsonObj) ? jsonObj.slice(0, 5) : (typeof jsonObj === 'object' ? Object.fromEntries(Object.entries(jsonObj).slice(0, 100)) : jsonObj)
            };
            jsonContent = JSON.stringify(summary, null, 2);
          }
        }
        
        return {
          content: [{ type: "text", text: jsonContent }],
        };
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : String(error);
        throw new Error(`Failed to convert XML to JSON: ${errorMessage}`);
      }
    }
  • TypeBox schema defining the input parameters for the tool: xmlPath (required), maxBytes/maxResponseBytes (optional limits), options for parser behavior.
    import { Type, type Static } from "@sinclair/typebox";

    export const XmlToJsonStringArgsSchema = Type.Object({
      xmlPath: Type.String({ description: 'Path to the XML file to convert' }),
      maxBytes: Type.Optional(Type.Integer({
        minimum: 1,
        description: '[Deprecated semantics] Previously limited file bytes read; now treated as a response size cap in bytes.'
      })),
      maxResponseBytes: Type.Optional(Type.Integer({
        minimum: 1,
        description: 'Maximum size, in bytes, of the returned JSON string. Parsing reads full file; response may be truncated to respect this limit.'
      })),
      options: Type.Optional(
        Type.Object({
          ignoreAttributes: Type.Boolean({ default: false, description: 'Whether to ignore attributes in XML' }),
          preserveOrder: Type.Boolean({ default: true, description: 'Whether to preserve the order of properties' })
        }, { default: {} })
      )
    });
    export type XmlToJsonStringArgs = Static<typeof XmlToJsonStringArgsSchema>;
  • index.ts:269-270 (registration)
    Maps the tool name 'xml_to_json_string' to its handler function in the toolHandlers object, which is used during server tool registration.
    xml_to_json_string: (a: unknown) =>
      handleXmlToJsonString(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  • index.ts:324-324 (registration)
    Declares the tool metadata (name and description) in the allTools array, which determines available tools based on permissions.
    { name: "xml_to_json_string", description: "XML to JSON string" },
  • Re-exports and maps the schema to the tool name in the central toolSchemas object used by the server.
    xml_to_json_string: XmlToJsonStringArgsSchema,
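The handler's response-size cap can be seen in isolation. The sketch below reimplements just that branch of handleXmlToJsonString as a standalone function; TextEncoder stands in for Buffer.byteLength so the snippet runs anywhere, and the slice sizes mirror the handler's:

```typescript
// Standalone sketch of the output-size cap used by handleXmlToJsonString:
// if the pretty-printed JSON exceeds the limit, return a summary object
// with _meta.truncated set instead of the full payload.
function capJsonResponse(jsonObj: unknown, responseLimit: number): string {
  let jsonContent = JSON.stringify(jsonObj, null, 2);
  const size = new TextEncoder().encode(jsonContent).length; // byte length, like Buffer.byteLength
  if (responseLimit > 0 && size > responseLimit) {
    const summary = {
      _meta: {
        truncated: true,
        originalSize: size,
        note: `JSON too large; summarizing to fit ${responseLimit} bytes.`,
      },
      sample: Array.isArray(jsonObj)
        ? jsonObj.slice(0, 5) // first 5 array elements
        : typeof jsonObj === "object" && jsonObj !== null
          ? Object.fromEntries(Object.entries(jsonObj).slice(0, 100)) // first 100 keys
          : jsonObj,
    };
    jsonContent = JSON.stringify(summary, null, 2);
  }
  return jsonContent;
}

// A large array gets summarized; a small object passes through unchanged.
const capped = JSON.parse(capJsonResponse(Array.from({ length: 500 }, (_, i) => i), 200));
const passedThrough = JSON.parse(capJsonResponse({ ok: true }, 200));
```

Note that the summary itself is not guaranteed to fit under the limit: an object with one huge key still carries that key's full value in the sample.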
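The registration entries above boil down to a name-to-handler lookup. A minimal sketch of that dispatch pattern follows; the handler body is a stand-in, since the real one is async and also receives allowedDirectories, symlinksMap, and noFollowSymlinks:

```typescript
// Simplified dispatch table; the real toolHandlers map closes over server
// state and its handlers are async, but the lookup pattern is the same.
type ToolResult = { content: { type: string; text: string }[] };
type ToolHandler = (args: unknown) => ToolResult;

const toolHandlers: Record<string, ToolHandler> = {
  // Stand-in for handleXmlToJsonString: just echoes its arguments as text.
  xml_to_json_string: (args) => ({
    content: [{ type: "text", text: JSON.stringify(args) }],
  }),
};

function callTool(name: string, args: unknown): ToolResult {
  const handler = toolHandlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

const result = callTool("xml_to_json_string", { xmlPath: "/tmp/a.xml" });
```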
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it's a read-only operation ('only reads the XML file'), has a size constraint ('maxBytes parameter'), uses a specific parser ('fast-xml-parser'), and has path restrictions ('within allowed directories'). It could improve by mentioning error handling or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. Each subsequent sentence adds useful context about parameters, implementation, and constraints. Minor redundancy exists in mentioning maxBytes twice, but overall it's efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with no annotations and no output schema, the description covers the essential behavior and constraints adequately. However, it lacks details about the JSON output format, error conditions, or how the conversion handles malformed XML, which would be helpful given the missing structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67%, and the description adds some value by explaining the purpose of maxBytes ('default 10KB') and mentioning the xmlPath requirement. However, it doesn't elaborate on the options parameter's semantics beyond what the schema provides, leaving gaps in understanding the conversion behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Convert an XML file to a JSON string and return it directly') and distinguishes it from siblings like xml_query or xml_structure by emphasizing direct conversion without file creation. It explicitly mentions the resource (XML file) and output format (JSON string).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('quickly inspecting XML content as JSON without creating a new file'), which differentiates it from file-reading or querying siblings. However, it doesn't explicitly state when NOT to use it or name specific alternatives like xml_query for more complex operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

