
Paddle MCP Server

Official
by PaddleHQ

get_simulation_run

Read-only

Retrieve a simulation run from Paddle by its ID, optionally including related events to analyze the simulation's outcomes.

Instructions

This tool will retrieve a simulation run from Paddle by its ID.

Use the include parameter to include related entities in the response:

  • events: An array of event entities for events sent by this simulation run.
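For reference, a call that pulls in related events could pass arguments shaped like this. The IDs are illustrative placeholders, and the exact format of the include value (string vs. array) is an assumption based on the option above:

```typescript
// Hypothetical arguments for get_simulation_run; the IDs are illustrative
// placeholders, not real Paddle entity IDs.
const args = {
  simulationId: "ntfsim_123",       // Paddle ID of the parent simulation
  simulationRunId: "ntfsimrun_456", // Paddle ID of the run to retrieve
  include: "events",                // request related event entities
};
```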

Input Schema

Name             Required  Description
simulationId     Yes       Paddle ID of the simulation entity associated with the run.
simulationRunId  Yes       Paddle ID of the simulation run entity.
include          No        Include related entities in the response.

Implementation Reference

  • The core handler function that implements the get_simulation_run tool by calling the Paddle SDK's simulationRuns.get method with the provided simulationId, simulationRunId, and optional query parameters.
    export const getSimulationRun = async (
      paddle: Paddle,
      params: z.infer<typeof Parameters.getSimulationRunParameters>,
    ) => {
      try {
        // Split the two path parameters from any optional query parameters
        // (such as include).
        const { simulationId, simulationRunId, ...queryParams } = params;
        const hasQueryParams = Object.keys(queryParams).length > 0;
        const simulationRun = await paddle.simulationRuns.get(
          simulationId,
          simulationRunId,
          // Pass undefined rather than an empty object when no query
          // parameters were supplied.
          hasQueryParams ? queryParams : undefined,
        );
        return simulationRun;
      } catch (error) {
        // Errors are returned rather than thrown so the MCP layer can
        // surface them to the agent.
        return error;
      }
    };
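The handler's split between path parameters and optional query parameters can be illustrated in isolation. This is a minimal sketch with assumed parameter types, not the server's actual code; include is typed loosely since its exact format is not documented here:

```typescript
// Sketch of the parameter-splitting pattern used by the handler above.
// The parameter shape is assumed from the input schema.
type GetSimulationRunParams = {
  simulationId: string;
  simulationRunId: string;
  include?: unknown;
};

function splitParams(params: GetSimulationRunParams) {
  // Pull out the two path parameters; everything else becomes query parameters.
  const { simulationId, simulationRunId, ...queryParams } = params;
  const hasQueryParams = Object.keys(queryParams).length > 0;
  return {
    simulationId,
    simulationRunId,
    // Pass undefined instead of an empty object when no query params were given.
    query: hasQueryParams ? queryParams : undefined,
  };
}
```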
  • Defines the MCP tool schema including the Zod parameters schema (params.getSimulationRunParameters), description prompt, name, and required permissions/actions for the get_simulation_run tool.
    {
      method: "get_simulation_run",
      name: "Get a run for a simulation",
      description: prompts.getSimulationRunPrompt,
      parameters: params.getSimulationRunParameters,
      actions: {
        simulationRuns: {
          read: true,
          get: true,
        },
      },
    },
  • src/api.ts:61-61 (registration)
    Registers the getSimulationRun handler in the toolMap object, mapping the constant TOOL_METHODS.GET_SIMULATION_RUN to the function for execution in PaddleAPI.run.
    [TOOL_METHODS.GET_SIMULATION_RUN]: funcs.getSimulationRun,
  • src/constants.ts:53-53 (registration)
    Defines the constant TOOL_METHODS.GET_SIMULATION_RUN used for tool identification in registration and tool definitions.
    GET_SIMULATION_RUN: "get_simulation_run",
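Taken together, the constant and the toolMap entry implement a simple method-keyed dispatch. The following is a self-contained sketch of that pattern; the handler body is a stub, not the real getSimulationRun:

```typescript
// Minimal sketch of method-keyed tool dispatch; the handler is a stand-in.
const TOOL_METHODS = {
  GET_SIMULATION_RUN: "get_simulation_run",
} as const;

const toolMap: Record<string, (arg: unknown) => unknown> = {
  // In the real server this entry maps to funcs.getSimulationRun.
  [TOOL_METHODS.GET_SIMULATION_RUN]: (arg) => ({ handled: true, arg }),
};

function run(method: string, arg: unknown): unknown {
  const handler = toolMap[method];
  if (!handler) {
    throw new Error(`Unknown tool method: ${method}`);
  }
  return handler(arg);
}
```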
  • The dynamic registration loop in PaddleMCPServer that registers all filtered tools (including get_simulation_run) with the MCP server using the tool definitions from tools.ts and delegates execution to PaddleAPI.
    // (The arguments below are passed to the MCP server's tool-registration
    //  method; the enclosing call expression is omitted from this excerpt.)
      tool.method,
      tool.description,
      tool.parameters.shape,
      annotations,
      async (arg: unknown, _extra: unknown) => {
        const result = await this._paddle.run(tool.method, arg);
        return {
          content: [
            {
              type: "text" as const,
              text: String(result),
            },
          ],
        };
      },
    );
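The response envelope built inside that callback can be sketched on its own. One detail worth noting: String(result) on a plain object yields "[object Object]", so handler results presumably arrive as strings or pre-serialized values:

```typescript
// Standalone sketch of the MCP text-content envelope built in the loop above.
function wrapAsTextContent(result: unknown) {
  return {
    content: [
      {
        type: "text" as const,
        text: String(result), // same coercion the registration loop applies
      },
    ],
  };
}
```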
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context about the 'include' parameter behavior (specifically that 'events' returns an array of event entities), which goes beyond what annotations provide. However, it doesn't describe other behavioral traits like error handling, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: one stating the purpose and one explaining the 'include' parameter. It's front-loaded with the core purpose. While efficient, the second sentence could be slightly more structured (e.g., using bullet points for clarity), but overall it avoids unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 required parameters, no output schema), the description is adequate but has gaps. It covers the basic purpose and parameter usage but lacks information about return values, error cases, or how this fits into broader workflows with sibling tools. With annotations covering safety, it meets minimum viability but could be more comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal value by briefly explaining the 'include' parameter's effect ('include related entities in the response') and listing the 'events' option, but this mostly repeats schema information. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'retrieve a simulation run from Paddle by its ID.' This is a specific verb ('retrieve') + resource ('simulation run') combination. However, it doesn't explicitly differentiate from sibling tools like 'get_simulation' or 'list_simulation_runs' beyond the obvious ID-based retrieval vs listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some implied usage guidance by mentioning the 'include' parameter for related entities, but it doesn't explicitly state when to use this tool versus alternatives like 'get_simulation' or 'list_simulation_runs.' There's no mention of prerequisites, error conditions, or specific contexts where this tool is preferred over others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/PaddleHQ/paddle-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.