json_dry_run

Analyze JSON file size breakdown by field using a shape object to determine granularity. Returns size in bytes for each specified field, helping optimize data usage and structure.

Instructions

Analyze the size breakdown of JSON data using a shape object to determine granularity. Returns size information in bytes for each specified field, mirroring the shape structure but with size values instead of data.

Input Schema

JSON Schema

filePath (required): Path to the JSON file (local) or HTTP/HTTPS URL to analyze

shape (optional): Shape object (formatted as valid JSON) defining what to analyze for size. Use 'true' to get the total size of a field, or nested objects for a detailed breakdown.

Examples:
1. Size of a single field: {"name": true}
2. Sizes of multiple fields: {"name": true, "email": true, "age": true}
3. Detailed breakdown: {"user": {"name": true, "profile": {"bio": true}}}
4. Array analysis: {"posts": {"title": true, "content": true}} - returns the total size of all matching elements
5. Complex analysis: {"metadata": true, "users": {"name": true, "settings": {"theme": true}}, "posts": {"title": true, "tags": true}}

Notes:
- Returns size in bytes for each specified field
- Output structure mirrors the shape but with size values
- Array analysis returns the total size of all matching elements
- Use the json_schema tool to understand the JSON structure first
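To make the mirroring behavior concrete, here is a minimal standalone sketch (not the tool's actual source) of how a shape such as {"name": true, "profile": {"bio": true}} maps to a size breakdown, assuming field size is measured as the UTF-8 byte length of the JSON-serialized value:

```typescript
// Hypothetical illustration: the breakdown mirrors the shape,
// with byte sizes at the leaves instead of data.
const data = { name: "Ada", profile: { bio: "Pioneer" } };

// Assumed size metric: UTF-8 byte length of the JSON serialization.
const size = (v: unknown): number =>
  new TextEncoder().encode(JSON.stringify(v)).length;

const breakdown = {
  name: size(data.name),                   // "Ada" serializes to 5 bytes
  profile: { bio: size(data.profile.bio) } // "Pioneer" serializes to 9 bytes
};
console.log(breakdown); // { name: 5, profile: { bio: 9 } }
```

Note that the quotes added by JSON serialization count toward the byte size, which is why "Ada" reports 5 bytes rather than 3.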

Implementation Reference

  • Core handler function that executes the json_dry_run tool logic: ingests and parses JSON, computes size breakdown according to the shape, total and filtered sizes, and recommends chunk count.
    async function processJsonDryRun(input: JsonDryRunInput): Promise<JsonDryRunResult> {
      try {
        // Use strategy pattern to ingest JSON content
        const ingestionResult = await jsonIngestionContext.ingest(input.filePath);
        
        if (!ingestionResult.success) {
          // Map strategy errors to existing error format for backward compatibility
          return {
            success: false,
            error: ingestionResult.error
          };
        }
    
        // Parse JSON
        let parsedData: any;
        try {
          parsedData = JSON.parse(ingestionResult.content);
        } catch (error) {
          return {
            success: false,
            error: {
              type: 'invalid_json',
              message: 'Invalid JSON format in content',
              details: error
            }
          };
        }
    
        // Calculate size breakdown based on shape
        try {
          const sizeBreakdown = calculateSizeWithShape(parsedData, input.shape);
          
          // Calculate total size of the entire parsed data
          const totalSize = calculateValueSize(parsedData);
          
          // Calculate filtered data size
          const filteredData = extractWithShape(parsedData, input.shape);
          const filteredJson = JSON.stringify(filteredData, null, 2);
          const filteredSize = new TextEncoder().encode(filteredJson).length;
          
          // Calculate recommended chunks (400KB threshold)
          const CHUNK_THRESHOLD = 400 * 1024;
          const recommendedChunks = Math.ceil(filteredSize / CHUNK_THRESHOLD);
          
          return {
            success: true,
            sizeBreakdown,
            totalSize,
            filteredSize,
            recommendedChunks
          };
        } catch (error) {
          return {
            success: false,
            error: {
              type: 'validation_error',
              message: 'Failed to calculate size breakdown',
              details: error
            }
          };
        }
      } catch (error) {
        return {
          success: false,
          error: {
            type: 'validation_error',
            message: 'Unexpected error during processing',
            details: error
          }
        };
      }
    }
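The chunk recommendation in the handler above is plain arithmetic against the 400 KB threshold; extracted on its own it behaves like this:

```typescript
// Recommended chunk count = ceil(filteredSize / 400 KB), as in the handler above.
const CHUNK_THRESHOLD = 400 * 1024; // 409,600 bytes
const chunksFor = (filteredSize: number): number =>
  Math.ceil(filteredSize / CHUNK_THRESHOLD);

console.log(chunksFor(100_000));   // 1 (fits in a single chunk)
console.log(chunksFor(1_000_000)); // 3 (1,000,000 / 409,600 ≈ 2.44, rounded up)
```

An empty filtered result (0 bytes) yields 0 recommended chunks, since Math.ceil(0) is 0.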
  • src/index.ts:683-784 (registration)
    MCP server tool registration for 'json_dry_run', including description, inline input schema, and wrapper handler that validates input, calls processJsonDryRun, and formats the response.
    server.tool(
        "json_dry_run",
        "Analyze the size breakdown of JSON data using a shape object to determine granularity. Returns size information in bytes for each specified field, mirroring the shape structure but with size values instead of data.",
        {
            filePath: z.string().describe("Path to the JSON file (local) or HTTP/HTTPS URL to analyze"),
            shape: z.unknown().describe(`Shape object (formatted as valid JSON) defining what to analyze for size. Use 'true' to get total size of a field, or nested objects for detailed breakdown.
    
    Examples:
    1. Get size of single field: {"name": true}
    2. Get sizes of multiple fields: {"name": true, "email": true, "age": true}
    3. Get detailed breakdown: {"user": {"name": true, "profile": {"bio": true}}}
    4. Analyze arrays: {"posts": {"title": true, "content": true}} - gets total size of all matching elements
    5. Complex analysis: {
       "metadata": true,
       "users": {
         "name": true,
         "settings": {
           "theme": true
         }
       },
       "posts": {
         "title": true,
         "tags": true
       }
    }
    
    Note: 
    - Returns size in bytes for each specified field
    - Output structure mirrors the shape but with size values
    - Array analysis returns total size of all matching elements
    - Use json_schema tool to understand the JSON structure first`)
        },
        async ({ filePath, shape }) => {
            try {
                // If shape is a string, parse it as JSON
                let parsedShape = shape;
                if (typeof shape === 'string') {
                    try {
                        parsedShape = JSON.parse(shape);
                    } catch (e) {
                        return {
                            content: [
                                {
                                    type: "text",
                                    text: `Error: Invalid JSON in shape parameter: ${e instanceof Error ? e.message : String(e)}`
                                }
                            ],
                            isError: true
                        };
                    }
                }
    
                const validatedInput = JsonDryRunInputSchema.parse({
                    filePath,
                    shape: parsedShape
                });
                
                const result = await processJsonDryRun(validatedInput);
                
                if (result.success) {
                    // Format the response with total size and breakdown
                    const formatSize = (bytes: number): string => {
                        if (bytes < 1024) return `${bytes} bytes`;
                        if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
                        return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
                    };
    
                    const header = `Total file size: ${formatSize(result.totalSize)} (${result.totalSize} bytes)\nFiltered size: ${formatSize(result.filteredSize)} (${result.filteredSize} bytes)\nRecommended chunks: ${result.recommendedChunks}\n\nSize breakdown:\n`;
                    const breakdown = JSON.stringify(result.sizeBreakdown, null, 2);
                    
                    return {
                        content: [
                            {
                                type: "text",
                                text: header + breakdown
                            }
                        ]
                    };
                } else {
                    return {
                        content: [
                            {
                                type: "text",
                                text: `Error: ${result.error.message}`
                            }
                        ],
                        isError: true
                    };
                }
            } catch (error) {
                return {
                    content: [
                        {
                            type: "text",
                            text: `Validation error: ${error instanceof Error ? error.message : String(error)}`
                        }
                    ],
                    isError: true
                };
            }
        }
    );
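The formatSize helper inside the wrapper above renders byte counts as human-readable sizes; lifted out verbatim, it produces output like this:

```typescript
// Same logic as the formatSize helper in the registration handler.
const formatSize = (bytes: number): string => {
  if (bytes < 1024) return `${bytes} bytes`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
};

console.log(formatSize(512));       // "512 bytes"
console.log(formatSize(409_600));   // "400.0 KB"
console.log(formatSize(2_621_440)); // "2.5 MB"
```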
  • Input schema and type definition for the json_dry_run tool, using Zod.
    const JsonDryRunInputSchema = z.object({
      filePath: z.string().min(1, "File path or HTTP/HTTPS URL is required").refine(
        (val) => val.length > 0 && (val.startsWith('./') || val.startsWith('/') || val.startsWith('http://') || val.startsWith('https://') || !val.includes('/')),
        "Must be a valid file path or HTTP/HTTPS URL"
      ),
      shape: z.any().describe("Shape object defining what to analyze")
    });
    
    type JsonSchemaInput = z.infer<typeof JsonSchemaInputSchema>;
    type JsonFilterInput = z.infer<typeof JsonFilterInputSchema>;
    type JsonDryRunInput = z.infer<typeof JsonDryRunInputSchema>;
  • Output result type definition for json_dry_run tool.
    type JsonDryRunResult = {
      readonly success: true;
      readonly sizeBreakdown: any;
      readonly totalSize: number;
      readonly filteredSize: number;
      readonly recommendedChunks: number;
    } | {
      readonly success: false;
      readonly error: JsonSchemaError;
    };
  • Key helper function that recursively calculates the size breakdown of JSON data according to the provided shape definition, handling arrays and nested objects.
    function calculateSizeWithShape(data: any, shape: Shape): any {
      if (Array.isArray(data)) {
        // For arrays, apply the shape to each element and sum the results
        let totalSizeBreakdown: any = {};
        let isFirstElement = true;
        
        for (const item of data) {
          const itemSizeBreakdown = calculateSizeWithShape(item, shape);
          
          if (isFirstElement) {
            totalSizeBreakdown = itemSizeBreakdown;
            isFirstElement = false;
          } else {
            // Sum up the sizes
            totalSizeBreakdown = addSizeBreakdowns(totalSizeBreakdown, itemSizeBreakdown);
          }
        }
        
        return totalSizeBreakdown;
      }
    
      const result: any = {};
      for (const key in shape) {
        const rule = shape[key];
        if (data[key] === undefined) {
          // If the key doesn't exist in data, skip it or set to 0
          continue;
        }
        
        if (rule === true) {
          // Calculate total size of this key's value
          result[key] = calculateValueSize(data[key]);
        } else if (typeof rule === 'object') {
          // Recursively break down this key's value
          result[key] = calculateSizeWithShape(data[key], rule);
        }
      }
      return result;
    }
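The addSizeBreakdowns helper referenced above is not included in this listing. A plausible sketch (an assumption, not the actual implementation) would recursively sum the numeric leaves of two breakdowns that share the shape's structure:

```typescript
// Hypothetical sketch of addSizeBreakdowns: merges two size breakdowns by
// summing numeric leaves. The real implementation may differ.
function addSizeBreakdowns(a: any, b: any): any {
  if (typeof a === "number" && typeof b === "number") return a + b;
  const result: any = {};
  const keys = new Set([...Object.keys(a ?? {}), ...Object.keys(b ?? {})]);
  for (const key of keys) {
    if (a?.[key] === undefined) result[key] = b[key];       // only in b
    else if (b?.[key] === undefined) result[key] = a[key];  // only in a
    else result[key] = addSizeBreakdowns(a[key], b[key]);   // sum recursively
  }
  return result;
}

console.log(addSizeBreakdowns(
  { name: 5, profile: { bio: 9 } },
  { name: 7, profile: { bio: 3 } }
)); // { name: 12, profile: { bio: 12 } }
```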
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well at disclosing behavioral traits: it explains what the tool returns ('size information in bytes', 'mirroring the shape structure'), clarifies array handling ('returns total size of all matching elements'), and mentions the need to understand the JSON structure first. It doesn't cover potential limitations such as file size constraints or error conditions, but it provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence clearly states the tool's purpose and output. The subsequent examples and notes are necessary for understanding the shape parameter's complex usage. While somewhat lengthy due to the examples, every sentence serves a clear purpose in explaining this analytical tool's behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's analytical complexity and lack of output schema, the description provides substantial context: it explains the return format ('mirroring the shape structure but with size values'), array handling, and relationship to sibling tools. The main gap is the absence of output schema documentation, but the description compensates well with clear behavioral explanations for this size analysis tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds some value by explaining the shape parameter's purpose ('defining what to analyze for size') and providing extensive examples, but it doesn't add semantic meaning beyond what the schema's examples and descriptions already provide. This meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('analyze', 'determine', 'returns') and resources ('JSON data', 'size breakdown', 'bytes for each specified field'). It distinguishes from sibling tools by focusing on size analysis rather than filtering or schema extraction, with explicit mention of using json_schema first for structure understanding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it states 'Use json_schema tool to understand the JSON structure first', clearly positioning this as a follow-up analysis tool. The shape parameter examples illustrate specific use cases, and the tool's focus on size analysis differentiates it from json_filter (for filtering) and json_schema (for structure).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
