
API Response Mocker

mock_api_response

Generate realistic mock API responses from JSON Schema specifications for testing and development purposes.

Instructions

Generate realistic mock API responses from a JSON Schema. Supports nested objects, arrays, string formats (email, uuid, date-time, url), field-name heuristics, enums, and min/max constraints. Set seed for reproducible output. Returns 1–100 records.

Input Schema

JSON Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| schema | Yes | JSON Schema object describing the shape of the mock data | |
| count | No | Number of mock records to generate (1–100) | 1 |
| seed | No | Optional seed for reproducible output | |
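For illustration, a call to mock_api_response might pass arguments shaped like the following. Only the parameter names and constraints come from the table; the concrete user schema and values here are made up:

```typescript
// Argument shape for mock_api_response, mirroring the input table above.
// The concrete schema, count, and seed values are illustrative only.
interface MockApiResponseArgs {
  schema: Record<string, unknown>; // JSON Schema describing the mock data shape
  count?: number;                  // 1-100, defaults to 1 when omitted
  seed?: number;                   // optional seed for reproducible output
}

const args: MockApiResponseArgs = {
  schema: {
    type: "object",
    properties: {
      id: { type: "string", format: "uuid" },
      email: { type: "string", format: "email" },
      age: { type: "integer", minimum: 18, maximum: 99 },
    },
  },
  count: 3,
  seed: 42,
};

// The tool enforces count in [1, 100]; a matching client-side check:
const countOk =
  args.count === undefined ||
  (Number.isInteger(args.count) && args.count >= 1 && args.count <= 100);
console.log(countOk);
```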

Implementation Reference

  • Registration of the mock_api_response tool.
    server.registerTool(
      "mock_api_response",
      {
        title: "API Response Mocker",
        description:
          "Generate realistic mock API responses from a JSON Schema. " +
          "Supports nested objects, arrays, string formats (email, uuid, date-time, url), " +
          "field-name heuristics, enums, and min/max constraints. " +
          "Set seed for reproducible output. Returns 1–100 records.",
        inputSchema: {
          schema: z.record(z.unknown()).describe("JSON Schema object describing the shape of the mock data"),
          count: z.number().int().min(1).max(100).default(1).describe("Number of mock records to generate (1–100)"),
          seed: z.number().int().optional().describe("Optional seed for reproducible output"),
        },
      },
      async ({ schema, count, seed }) => {
        const result = await callToolApi("api-response-mocker", { schema, count, seed });
        const data = result as any;
        const r = data.result;
    
        const lines = [
          `**Generated ${r.count} mock record${r.count !== 1 ? "s" : ""}** (schema type: ${r.schema.type})`,
          "",
          "```json",
          JSON.stringify(r.data, null, 2),
          "```",
        ];
    
        return { content: [{ type: "text" as const, text: lines.join("\n") }] };
      }
    );
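The description's promise of seed-based reproducibility can be sketched independently of the Agent Toolbelt API. The following is a minimal illustration (not the server's actual implementation) of a seeded PRNG plus one field-name heuristic of the kind the description mentions:

```typescript
// Illustrative sketch only: a small deterministic PRNG (mulberry32) keyed by
// the seed, so the same seed always yields the same sequence of values.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// A field-name heuristic: fields whose name looks like "email" get
// email-shaped strings; everything else gets a generic placeholder.
function mockString(field: string, rand: () => number): string {
  const n = Math.floor(rand() * 1000);
  return /email/i.test(field) ? `user${n}@example.com` : `value-${n}`;
}

const randA = mulberry32(42);
const randB = mulberry32(42);
const a = mockString("email", randA);
const b = mockString("email", randB);
console.log(a === b); // same seed, same output: true
```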
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and succeeds well: discloses reproducibility via seed, volume limits (1–100 records), and generation logic (field-name heuristics, format support). Missing only error handling or side effect details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense sentences efficiently pack capability details, feature enumeration, behavioral constraints, and output volume. Every clause earns its place; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 3-parameter tool with no output schema: mentions return volume (1–100 records) and core functionality. Could slightly improve by indicating return structure (array vs object), but the feature list adequately implies output richness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (baseline 3). Description adds value by explaining seed's purpose ('reproducible output') and reinforcing count constraints ('Returns 1–100 records'), providing semantic context beyond the schema's bare descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Generate' + resource 'mock API responses' + input 'JSON Schema' clearly defines the tool's function. Distinguishes from sibling 'generate_schema' (which creates schemas) by consuming schemas to produce data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists capabilities (nested objects, string formats, constraints) that imply when to use it, but lacks explicit 'when to use vs alternatives' or prerequisites. No sibling seems to be a direct alternative, but the description doesn't explicitly guide the selection decision.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/marras0914/agent-toolbelt'

If you have feedback or need assistance with the MCP directory API, please join our Discord server