
JSON MCP Boilerplate

by ricleedo

json_read

Read and analyze JSON files to explore data structure, understand schema, and get overviews of large datasets for initial data exploration.

Instructions

Read and analyze JSON. Always use this tool to explore JSON structure, understand data schema, or get high-level overviews of large JSON. Use this for initial data exploration or when you need to understand the shape and types of data before extracting specific values.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `file_path` | Yes | Path to the JSON file | |
| `path` | No | Dot notation to a specific location | |
| `max_depth` | No | Limit traversal depth | `3` |
| `max_keys` | No | Maximum number of keys to show per object | show all keys |
| `sample_arrays` | No | Show only the first N array items | |
| `keys_only` | No | Return only the key structure | |
| `include_types` | No | Add type information | |
| `include_stats` | No | Add file size and structure statistics | |
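
As a concrete illustration, a call might pass arguments like the following. The file path and the `users` key are hypothetical examples; the option names come from the schema above.

```typescript
// Hypothetical arguments for a json_read call. "./data/users.json" and the
// "users" key are made-up examples; the option names match the input schema.
const args = {
  file_path: "./data/users.json",
  path: "users",   // drill into the top-level "users" key via dot notation
  keys_only: true, // return only the key structure, not the values
  max_depth: 2,    // stop expanding nested objects below two levels
};
```

Note that in the handler, `keys_only` takes precedence: when it is set, `sample_arrays` is ignored.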

Implementation Reference

  • The handler function for the 'json_read' tool. It reads the JSON file, navigates to a specific path if provided, applies analysis options such as structure preview or array sampling, optionally prepends stats and type information, truncates large outputs, and returns formatted markdown content.

```typescript
async ({
  file_path,
  path,
  max_depth,
  max_keys,
  sample_arrays,
  keys_only,
  include_types,
  include_stats,
}) => {
  try {
    const data = readJSONFile(file_path);
    const target = path ? getValueByPath(data, path) : data;

    let result: any;
    if (keys_only) {
      // Structure preview only: key names and types, up to max_depth (default 3)
      result = analyzeJSONStructure(target, max_depth || 3, 0, max_keys);
    } else if (sample_arrays !== undefined) {
      // Deep-copy while trimming every array to its first N items
      result = JSON.parse(
        JSON.stringify(target, (key, value) => {
          if (Array.isArray(value) && sample_arrays) {
            return value.slice(0, sample_arrays);
          }
          return value;
        })
      );
    } else {
      result = target;
    }

    // Build the stats markdown section if requested
    let statsMarkdown = "";
    if (include_stats) {
      const fileContent = readFileSync(resolve(file_path), "utf8");
      const fileSize = (fileContent.length / 1024).toFixed(2);
      // Character count of the serialized document, used as a rough size metric
      const nodeCount = JSON.stringify(data).length;
      statsMarkdown = "## File Statistics\n\n";
      statsMarkdown += `- **File Size**: ${fileSize} KB\n`;
      statsMarkdown += `- **Total Nodes**: ${nodeCount.toLocaleString()}\n`;
      statsMarkdown += `- **Root Type**: ${
        Array.isArray(data) ? "array" : typeof data
      }\n`;
      if (Array.isArray(target)) {
        statsMarkdown += `- **Array Length**: ${target.length}\n`;
        const elementTypes = [...new Set(target.map((item) => typeof item))];
        statsMarkdown += `- **Element Types**: ${elementTypes.join(", ")}\n`;
      } else if (typeof target === "object" && target !== null) {
        const keys = Object.keys(target);
        statsMarkdown += `- **Key Count**: ${keys.length}\n`;
        if (keys.length > 0) {
          const topKeys = keys.slice(0, 10);
          statsMarkdown += `- **Top Keys**: ${topKeys.join(", ")}`;
          if (keys.length > 10) {
            statsMarkdown += ` (and ${keys.length - 10} more)`;
          }
          statsMarkdown += "\n";
        }
      }
      statsMarkdown += "\n## Data\n\n";
    }

    // Build the type-info line if requested (skipped when stats already cover it)
    let typeInfo = "";
    if (include_types && !include_stats) {
      typeInfo = `**Type**: ${typeof target}`;
      if (Array.isArray(target)) {
        typeInfo = `**Type**: array (length: ${target.length})`;
      }
      typeInfo += "\n\n";
    }

    const truncatedOutput = truncateForOutput(result);
    let outputText = JSON.stringify(truncatedOutput, null, 2);
    // Unquote truncation markers so they read as plain text in the output
    outputText = outputText.replace(
      /"\.\.\.(\d+) more items"/g,
      "...$1 more items"
    );
    outputText = outputText.replace(
      /"\.\.\.(\d+) more properties": "\.\.\.?"/g,
      "...$1 more properties"
    );

    return {
      content: [{ type: "text", text: statsMarkdown + typeInfo + outputText }],
    };
  } catch (error: any) {
    return {
      content: [{ type: "text", text: `Error: ${error.message}` }],
    };
  }
}
```
  • The input schema definition for the 'json_read' tool using Zod, defining parameters like file_path, path, max_depth, etc., with descriptions.
```typescript
{
  file_path: z.string().describe("Path to the JSON file"),
  path: z.string().optional().describe("Dot notation to specific location"),
  max_depth: z.number().optional().describe("Limit traversal depth"),
  max_keys: z
    .number()
    .optional()
    .describe(
      "Maximum number of keys to show per object (default: show all keys)"
    ),
  sample_arrays: z
    .number()
    .optional()
    .describe("Show only first N array items"),
  keys_only: z.boolean().optional().describe("Return only the key structure"),
  include_types: z.boolean().optional().describe("Add type information"),
  include_stats: z
    .boolean()
    .optional()
    .describe("Add file size and structure statistics"),
}
```
  • src/index.ts:171-295 (registration)
    The registration of the 'json_read' tool using server.tool(), including name, description, schema, and handler.
```typescript
server.tool(
  "json_read",
  "Read and analyze JSON. Always use this tool to explore JSON structure, understand data schema, or get high-level overviews of large JSON. Use this for initial data exploration or when you need to understand the shape and types of data before extracting specific values.",
  {
    // ...input schema definition shown above...
  },
  // ...handler function shown above...
);
```
  • Core helper function to read and parse a JSON file safely, used by the json_read handler.
```typescript
function readJSONFile(filePath: string): any {
  const absolutePath = resolve(filePath);
  if (!existsSync(absolutePath)) {
    throw new Error(`File not found: ${absolutePath}`);
  }
  const content = readFileSync(absolutePath, "utf8");
  return safeParseJSON(content, absolutePath);
}
```
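
`readJSONFile` delegates parsing to `safeParseJSON`, which is not shown on this page. A plausible minimal version, assuming its job is simply to attach the offending file path to parse errors:

```typescript
// Hedged sketch of safeParseJSON (not shown in the source): wrap JSON.parse
// so that a syntax error reports which file failed to parse.
function safeParseJSON(content: string, filePath: string): any {
  try {
    return JSON.parse(content);
  } catch (error: any) {
    throw new Error(`Invalid JSON in ${filePath}: ${error.message}`);
  }
}
```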
  • Helper function to analyze and preview JSON structure up to max depth and keys, used when keys_only is true.
```typescript
function analyzeJSONStructure(
  obj: any,
  maxDepth: number = 3,
  currentDepth: number = 0,
  maxKeys?: number
): any {
  if (currentDepth > maxDepth) return "[...depth limit reached...]";
  if (obj === null) return null;
  if (typeof obj !== "object") return typeof obj;

  if (Array.isArray(obj)) {
    if (obj.length === 0) return [];
    const sample = obj
      .slice(0, 3)
      .map((item) =>
        analyzeJSONStructure(item, maxDepth, currentDepth + 1, maxKeys)
      );
    return obj.length > 3
      ? [...sample, `[...${obj.length - 3} more items]`]
      : sample;
  }

  const result: any = {};
  const keys = Object.keys(obj);
  const keyLimit = maxKeys ?? keys.length; // show all keys by default
  const sampleKeys = keys.slice(0, keyLimit);
  for (const key of sampleKeys) {
    result[key] = analyzeJSONStructure(
      obj[key],
      maxDepth,
      currentDepth + 1,
      maxKeys
    );
  }
  if (keys.length > keyLimit) {
    result[`[...${keys.length - keyLimit} more keys]`] = "...";
  }
  return result;
}
```
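
One more helper the handler uses, `truncateForOutput`, is not reproduced on this page. A hedged sketch consistent with the handler's post-processing regexes, which expect `"...N more items"` strings inside arrays and `"...N more properties": "..."` entries inside objects; the 50-entry cap here is an assumption, not taken from the source:

```typescript
// Hedged sketch of truncateForOutput (not shown in the source). Caps arrays
// and objects at MAX_ITEMS entries and appends the truncation markers that
// the handler later unquotes with its regex replacements.
const MAX_ITEMS = 50; // assumed limit, not confirmed by the source

function truncateForOutput(value: any): any {
  if (Array.isArray(value)) {
    const items = value.slice(0, MAX_ITEMS).map(truncateForOutput);
    if (value.length > MAX_ITEMS) {
      items.push(`...${value.length - MAX_ITEMS} more items`);
    }
    return items;
  }
  if (value !== null && typeof value === "object") {
    const result: any = {};
    const keys = Object.keys(value);
    for (const key of keys.slice(0, MAX_ITEMS)) {
      result[key] = truncateForOutput(value[key]);
    }
    if (keys.length > MAX_ITEMS) {
      result[`...${keys.length - MAX_ITEMS} more properties`] = "...";
    }
    return result;
  }
  return value;
}
```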