
JSON MCP Boilerplate

by ricleedo

json_read

Read and analyze JSON files to explore data structure, understand schema, and get overviews of large datasets for initial data exploration.

Instructions

Read and analyze JSON. Always use this tool to explore JSON structure, understand data schema, or get high-level overviews of large JSON. Use this for initial data exploration or when you need to understand the shape and types of data before extracting specific values.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| file_path | Yes | Path to the JSON file | |
| path | No | Dot notation to specific location | |
| max_depth | No | Limit traversal depth | |
| max_keys | No | Maximum number of keys to show per object | show all keys |
| sample_arrays | No | Show only first N array items | |
| keys_only | No | Return only the key structure | |
| include_types | No | Add type information | |
| include_stats | No | Add file size and structure statistics | |
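
As an illustration, a structure-preview call might pass arguments like the following (a hypothetical invocation; the file path and values are made up, and the exact call syntax depends on your MCP client):

```typescript
// Hypothetical arguments for a json_read call that returns only the
// key skeleton to depth 2 and prepends file statistics.
const args = {
  file_path: "./data/users.json", // illustrative path
  keys_only: true,
  max_depth: 2,
  include_stats: true,
};

console.log(JSON.stringify(args, null, 2));
```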

Implementation Reference

  • src/index.ts:171-295 (registration)
    The registration of the 'json_read' tool using server.tool(), including name, description, schema, and handler.
    server.tool(
      "json_read",
      "Read and analyze JSON. Always use this tool to explore JSON structure, understand data schema, or get high-level overviews of large JSON. Use this for initial data exploration or when you need to understand the shape and types of data before extracting specific values.",
      {
        file_path: z.string().describe("Path to the JSON file"),
        path: z.string().optional().describe("Dot notation to specific location"),
        max_depth: z.number().optional().describe("Limit traversal depth"),
        max_keys: z
          .number()
          .optional()
          .describe(
            "Maximum number of keys to show per object (default: show all keys)"
          ),
        sample_arrays: z
          .number()
          .optional()
          .describe("Show only first N array items"),
        keys_only: z.boolean().optional().describe("Return only the key structure"),
        include_types: z.boolean().optional().describe("Add type information"),
        include_stats: z
          .boolean()
          .optional()
          .describe("Add file size and structure statistics"),
      },
      async ({
        file_path,
        path,
        max_depth,
        max_keys,
        sample_arrays,
        keys_only,
        include_types,
        include_stats,
      }) => {
        try {
          const data = readJSONFile(file_path);
          const target = path ? getValueByPath(data, path) : data;
    
          let result: any;
    
          if (keys_only) {
            result = analyzeJSONStructure(target, max_depth || 3, 0, max_keys); // depth defaults to 3; note an explicit max_depth of 0 also falls back to 3
          } else if (sample_arrays !== undefined) {
            result = JSON.parse(
              JSON.stringify(target, (key, value) => {
                if (Array.isArray(value) && sample_arrays) {
                  return value.slice(0, sample_arrays);
                }
                return value;
              })
            );
          } else {
            result = target;
          }
    
          // Build stats markdown section if requested
          let statsMarkdown = "";
    
          if (include_stats) {
            const fileContent = readFileSync(resolve(file_path), "utf8");
            const fileSize = (fileContent.length / 1024).toFixed(2);
            const nodeCount = JSON.stringify(data).length; // note: character count of the serialized JSON, not a true node count
    
            statsMarkdown = "## File Statistics\n\n";
            statsMarkdown += `- **File Size**: ${fileSize} KB\n`;
            statsMarkdown += `- **Total Nodes**: ${nodeCount.toLocaleString()}\n`;
            statsMarkdown += `- **Root Type**: ${
              Array.isArray(data) ? "array" : typeof data
            }\n`;
    
            if (Array.isArray(target)) {
              statsMarkdown += `- **Array Length**: ${target.length}\n`;
              const elementTypes = [...new Set(target.map((item) => typeof item))];
              statsMarkdown += `- **Element Types**: ${elementTypes.join(", ")}\n`;
            } else if (typeof target === "object" && target !== null) {
              const keys = Object.keys(target);
              statsMarkdown += `- **Key Count**: ${keys.length}\n`;
              if (keys.length > 0) {
                const topKeys = keys.slice(0, 10);
                statsMarkdown += `- **Top Keys**: ${topKeys.join(", ")}`;
                if (keys.length > 10) {
                  statsMarkdown += ` (and ${keys.length - 10} more)`;
                }
                statsMarkdown += "\n";
              }
            }
    
            statsMarkdown += "\n## Data\n\n";
          }
    
          // Build type info markdown if requested
          let typeInfo = "";
          if (include_types && !include_stats) {
            typeInfo = `**Type**: ${typeof target}`;
            if (Array.isArray(target)) {
              typeInfo = `**Type**: array (length: ${target.length})`;
            }
            typeInfo += "\n\n";
          }
    
          const truncatedOutput = truncateForOutput(result);
          let outputText = JSON.stringify(truncatedOutput, null, 2);
    
          // Replace quoted truncation messages with unquoted text for markdown-like output
          outputText = outputText.replace(
            /"\.\.\.(\d+) more items"/g,
            "...$1 more items"
          );
          outputText = outputText.replace(
            /"\.\.\.(\d+) more properties": "\.\.\.?"/g,
            "...$1 more properties"
          );
    
          return {
            content: [
              { type: "text", text: statsMarkdown + typeInfo + outputText },
            ],
          };
        } catch (error: any) {
          return {
            content: [{ type: "text", text: `Error: ${error.message}` }],
          };
        }
      }
    );
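    The two replace() calls in the handler unquote the truncation markers left in the serialized result. A small standalone illustration of the first pattern, using a hand-written marker string (truncateForOutput itself is not shown in this reference):

    ```typescript
    // The quoted marker emitted by JSON.stringify is rewritten to bare
    // text, producing markdown-like output rather than strict JSON.
    let out = JSON.stringify(["a", "b", "...3 more items"], null, 2);
    out = out.replace(/"\.\.\.(\d+) more items"/g, "...$1 more items");
    console.log(out); // the marker line is now unquoted: ...3 more items
    ```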
  • Core helper function to read and parse a JSON file safely, used by the json_read handler.
    function readJSONFile(filePath: string): any {
      const absolutePath = resolve(filePath);
      if (!existsSync(absolutePath)) {
        throw new Error(`File not found: ${absolutePath}`);
      }
    
      const content = readFileSync(absolutePath, "utf8");
      return safeParseJSON(content, absolutePath);
    }
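    readJSONFile delegates parsing to safeParseJSON, which is not shown in this reference. A minimal sketch of what such a wrapper might look like (illustrative only; the server's actual helper may attach different context):

    ```typescript
    // Illustrative sketch (hypothetical; not the server's actual
    // safeParseJSON). It attaches the file path to the parse error so
    // the tool's error text identifies the offending file.
    function safeParseJSON(content: string, filePath: string): any {
      try {
        return JSON.parse(content);
      } catch (error: any) {
        throw new Error(`Invalid JSON in ${filePath}: ${error.message}`);
      }
    }
    ```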
  • Helper function to analyze and preview JSON structure up to max depth and keys, used when keys_only is true.
    function analyzeJSONStructure(
      obj: any,
      maxDepth: number = 3,
      currentDepth: number = 0,
      maxKeys?: number
    ): any {
      if (currentDepth > maxDepth) return "[...depth limit reached...]";
    
      if (obj === null) return null;
      if (typeof obj !== "object") return typeof obj;
    
      if (Array.isArray(obj)) {
        if (obj.length === 0) return [];
        const sample = obj
          .slice(0, 3)
          .map((item) =>
            analyzeJSONStructure(item, maxDepth, currentDepth + 1, maxKeys)
          );
        return obj.length > 3
          ? [...sample, `[...${obj.length - 3} more items]`]
          : sample;
      }
    
      const result: any = {};
      const keys = Object.keys(obj);
      const keyLimit = maxKeys ?? keys.length; // Show all keys by default
      const sampleKeys = keys.slice(0, keyLimit);
    
      for (const key of sampleKeys) {
        result[key] = analyzeJSONStructure(
          obj[key],
          maxDepth,
          currentDepth + 1,
          maxKeys
        );
      }
    
      if (keys.length > keyLimit) {
        result[`[...${keys.length - keyLimit} more keys]`] = "...";
      }
    
      return result;
    }
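  • The handler also depends on getValueByPath and truncateForOutput, neither of which appears in this reference. A minimal dot-notation resolver in the spirit of getValueByPath might look like this (an illustrative reimplementation; the server's real helper may handle edge cases differently):

    ```typescript
    // Illustrative dot-notation resolver (hypothetical; not the
    // server's actual getValueByPath). Numeric segments index into
    // arrays, so a path like "users.0.name" reaches the first user's
    // name.
    function getValueByPath(data: any, path: string): any {
      return path.split(".").reduce((current: any, segment: string) => {
        if (current === null || current === undefined) {
          throw new Error(`Path not found: ${path}`);
        }
        return current[segment];
      }, data);
    }
    ```

    For example, `getValueByPath({ users: [{ name: "ada" }] }, "users.0.name")` resolves to `"ada"`.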
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates that this is a read/analysis tool (not destructive), provides context about its exploratory nature, and hints at capabilities like handling large JSON files and providing overviews. However, it doesn't mention potential limitations like file size constraints or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with two sentences that each earn their place. The first sentence establishes the core purpose, while the second provides specific usage guidelines. There's zero wasted language and it's appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (8 parameters, no output schema, no annotations), the description provides excellent guidance on when and why to use it. However, without annotations or output schema, it could benefit from more explicit information about what the tool returns (e.g., formatted analysis vs. raw data) and any behavioral constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('read and analyze JSON') and resources ('JSON structure', 'data schema', 'large JSON'). It distinguishes itself from the sibling tool json_extract by emphasizing exploration and understanding rather than extraction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Always use this tool to explore JSON structure', 'for initial data exploration', 'when you need to understand the shape and types of data before extracting specific values') and implies when not to use it (when you need to extract specific values, suggesting json_extract as an alternative).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
