Glama
rawr-ai

Filesystem MCP Server

json_transform

Modify JSON data by applying sequential operations such as mapping, grouping, sorting, flattening, and field selection. Requires a file path, an array of operations, and a maxBytes size limit for safe processing.

Instructions

Transform JSON data using a sequence of operations. Supported operations include mapping array elements, grouping by a field, sorting, flattening nested arrays, and picking or omitting fields. The required maxBytes parameter caps how much of the file is read (handler default: 10KB). Operations are applied in sequence to transform the data structure, and the path must resolve within the allowed directories.

Input Schema

JSON Schema

  • maxBytes (required): Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.
  • operations (required): Array of transformation operations to apply in sequence.
  • path (required): Path to the JSON file to transform.

Implementation Reference

  • The core handler function for the 'json_transform' tool. It validates input arguments, reads and parses a JSON file, applies a sequence of transformation operations (map, groupBy, sort, flatten, pick, omit) based on the provided operations array, and returns the transformed JSON as a formatted string.
    // Helper functions (parseArgs, validatePath, readJsonFile, getProp, groupBy,
    // orderBy, flattenDeep, pick, omit) are imported from elsewhere in the server.
    export async function handleJsonTransform(
      args: unknown,
      allowedDirectories: string[],
      symlinksMap: Map<string, string>,
      noFollowSymlinks: boolean
    ) {
      const parsed = parseArgs(JsonTransformArgsSchema, args, 'json_transform');
    
      const validPath = await validatePath(parsed.path, allowedDirectories, symlinksMap, noFollowSymlinks);
      let jsonData = await readJsonFile(validPath, parsed.maxBytes);
    
      try {
        // Apply operations in sequence
        for (const op of parsed.operations) {
          switch (op.type) {
            case 'map':
              if (!Array.isArray(jsonData)) {
                throw new Error('Data must be an array for map operation');
              }
              if (!op.field) {
                throw new Error('Field is required for map operation');
              }
              jsonData = jsonData.map(item => getProp(item, op.field!));
              break;
    
            case 'groupBy':
              if (!Array.isArray(jsonData)) {
                throw new Error('Data must be an array for groupBy operation');
              }
              if (!op.field) {
                throw new Error('Field is required for groupBy operation');
              }
              jsonData = groupBy(jsonData, op.field);
              break;
    
            case 'sort':
              if (!Array.isArray(jsonData)) {
                throw new Error('Data must be an array for sort operation');
              }
              if (!op.field) {
                throw new Error('Field is required for sort operation');
              }
              jsonData = orderBy(
                jsonData,
                op.field,
                [op.order || 'asc']
              );
              break;
    
            case 'flatten':
              if (!Array.isArray(jsonData)) {
                throw new Error('Data must be an array for flatten operation');
              }
              jsonData = flattenDeep(jsonData);
              break;
    
            case 'pick':
              if (!op.fields || !op.fields.length) {
                throw new Error('Fields array is required for pick operation');
              }
              if (Array.isArray(jsonData)) {
                jsonData = jsonData.map(item => pick(item, op.fields!));
              } else {
                jsonData = pick(jsonData, op.fields);
              }
              break;
    
            case 'omit':
              if (!op.fields || !op.fields.length) {
                throw new Error('Fields array is required for omit operation');
              }
              if (Array.isArray(jsonData)) {
                jsonData = jsonData.map(item => omit(item, op.fields!));
              } else {
                jsonData = omit(jsonData, op.fields);
              }
              break;
          }
        }
    
        return {
          content: [{ 
            type: "text", 
            text: JSON.stringify(jsonData, null, 2)
          }],
        };
      } catch (error) {
        if (error instanceof Error) {
          throw new Error(`JSON transformation failed: ${error.message}`);
        }
        throw error;
      }
    }
  • TypeBox schema definition for JsonTransformArgs, including path to JSON file, array of operations (map, groupBy, sort, flatten, pick, omit), and maxBytes limit. Defines the input validation for the tool.
    export const JsonTransformArgsSchema = Type.Object({
      path: Type.String({ description: 'Path to the JSON file to transform' }),
      operations: Type.Array(
        Type.Object({
          type: Type.Union([
            Type.Literal('map'),
            Type.Literal('groupBy'),
            Type.Literal('sort'),
            Type.Literal('flatten'),
            Type.Literal('pick'),
            Type.Literal('omit')
          ], { description: 'Type of transformation operation' }),
          field: Type.Optional(Type.String({ description: 'Field to operate on (if applicable)' })),
          order: Type.Optional(Type.Union([Type.Literal('asc'), Type.Literal('desc')], { description: 'Sort order (if applicable)' })),
          fields: Type.Optional(Type.Array(Type.String(), { description: 'Fields to pick/omit (if applicable)' }))
        }),
        { minItems: 1, description: 'Array of transformation operations to apply in sequence' }
      ),
      maxBytes: Type.Integer({
        minimum: 1,
        description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
      })
    });
    export type JsonTransformArgs = Static<typeof JsonTransformArgsSchema>;
  • index.ts:287-288 (registration)
    Registers the handler function for 'json_transform' in the toolHandlers object, wrapping it with context (allowedDirectories, symlinksMap, noFollowSymlinks). This is used in the server.addTool loop.
    json_transform: (a: unknown) =>
      handleJsonTransform(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  • index.ts:330-330 (registration)
    Includes 'json_transform' in the allTools array with its name and description, which determines if it's enabled based on permissions and passed to server.addTool.
    { name: "json_transform", description: "Transform JSON" },
  • Maps the 'json_transform' tool name to its JsonTransformArgsSchema in the central toolSchemas export, used by index.ts for parameter schema in tool registration.
    json_transform: JsonTransformArgsSchema,
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds some context: it specifies that operations are applied in sequence and mentions the maxBytes parameter with a default (10KB), which hints at performance constraints. However, it lacks details on error handling, output format, memory limits, or side effects (e.g., whether it modifies files or just returns transformed data). This partial disclosure is adequate but leaves gaps for a mutation-like tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise and front-loaded: the first sentence states the core purpose, followed by supporting details in a logical flow. However, the second sentence lists operations without prioritization, and the third mixes parameter info with path constraints, slightly reducing clarity. Overall, it's efficient with minimal waste, though minor restructuring could improve readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (transforming JSON with multiple operations) and lack of annotations or output schema, the description is moderately complete. It covers the transformation process and key constraints but omits details on output structure, error cases, and performance implications. For a tool with no output schema and behavioral gaps, this leaves the agent under-informed, though the core functionality is adequately described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters (path, operations, maxBytes) thoroughly. The description adds marginal value by reiterating the maxBytes default and hinting at the path constraint ('within allowed directories'), but doesn't provide additional syntax, examples, or nuances beyond the schema. This meets the baseline for high schema coverage without enhancing parameter understanding significantly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Transform JSON data using a sequence of operations' with specific examples like mapping, grouping, sorting, flattening, and picking/omitting fields. It distinguishes from siblings like json_filter, json_query, and json_structure by emphasizing transformation rather than filtering, querying, or analyzing structure. However, it doesn't explicitly contrast with all siblings (e.g., json_get_value, json_sample), keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance: it mentions that the path must be within allowed directories, implying a constraint, but offers no explicit when-to-use advice. It doesn't differentiate when to choose this tool over alternatives like json_filter or json_query, nor does it mention prerequisites or exclusions. This lack of comparative context leaves the agent with little guidance on optimal tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rawr-ai/mcp-filesystem'