Paddle MCP Server

Official
by PaddleHQ

list_simulations

Read-only

Retrieve simulation configurations in Paddle Billing. Filter by ID, status, or notification destination, with paginated results and sorting options.

Instructions

This tool will list simulations in Paddle.

These are the configurations for simulations, as opposed to simulation runs, which are what actually send the events to the notification destination.

Use the maximum perPage by default (200) to ensure comprehensive results. Filter simulations by notificationSettingId, id, and status as needed. Results are paginated; use the 'after' parameter with the last ID from the previous results to get the next page. Sort and order results using the orderBy parameter.
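The paging scheme described above can be sketched as a loop. This is a hypothetical illustration: `callTool` stands in for however your MCP client invokes tools, and the result shape follows the handler shown under Implementation Reference.

```typescript
// Sketch of cursor-based paging through list_simulations results.
// callTool is an assumed stand-in for an MCP client's tool invocation.
type Simulation = { id: string };
type ListResult = { pagination: { hasMore: boolean }; simulations: Simulation[] };

async function fetchAllSimulations(
  callTool: (name: string, params: Record<string, unknown>) => Promise<ListResult>
): Promise<Simulation[]> {
  const all: Simulation[] = [];
  let after: string | undefined;
  for (;;) {
    // Request the maximum page size and thread the cursor from the prior page.
    const result = await callTool("list_simulations", { perPage: 200, after });
    all.push(...result.simulations);
    if (!result.pagination.hasMore || result.simulations.length === 0) break;
    // The 'after' cursor is the ID of the last entity in the previous page.
    after = result.simulations[result.simulations.length - 1].id;
  }
  return all;
}
```

The loop stops when the server reports no further pages, so a single call suffices when everything fits in one page of 200.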

Input Schema

All parameters are optional:

  • after: Return entities after the specified Paddle ID when working with paginated endpoints.
  • notificationSettingId: Return entities related to the specified notification destination. Use a comma-separated list to specify multiple notification destination IDs.
  • orderBy: Order returned entities by the specified field and direction.
  • perPage: Set how many entities are returned per page. Returns the maximum number of results if a number greater than the maximum is requested.
  • id: Return only the IDs specified. Use a comma-separated list to get multiple entities.
  • status: Return entities that match the specified status. Use a comma-separated list to specify multiple status values.
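An illustrative input object combining these filters. All IDs and the orderBy value below are invented for the example; Paddle's actual order-by syntax may differ.

```typescript
// Hypothetical list_simulations input; every value here is made up.
const params = {
  perPage: 200,                                         // maximum page size
  status: "active,archived",                            // comma-separated statuses
  notificationSettingId: "ntfset_abc123,ntfset_def456", // multiple destinations
  orderBy: "id[ASC]",                                   // field plus direction
};
console.log(JSON.stringify(params));
```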

Implementation Reference

  • The handler function that executes the list_simulations tool. It calls paddle.simulations.list(params), fetches the first page with next(), adds pagination info, and returns the result or error.
    export const listSimulations = async (paddle: Paddle, params: z.infer<typeof Parameters.listSimulationsParameters>) => {
      try {
        const collection = paddle.simulations.list(params);
        const simulations = await collection.next();
        const pagination = paginationData(collection);
        return { pagination, simulations };
      } catch (error) {
        return error;
      }
    };
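Because the catch branch returns the error rather than throwing, a caller has to distinguish the two result shapes itself. One way to narrow the union, as a sketch with the success type inferred from the handler above:

```typescript
// Sketch: narrowing the handler's result, which is either a success payload
// or the raw error object returned from the catch branch.
type ListSuccess = {
  pagination: { hasMore: boolean; estimatedTotal: number };
  simulations: unknown[];
};

function isListSuccess(result: unknown): result is ListSuccess {
  return (
    typeof result === "object" &&
    result !== null &&
    !(result instanceof Error) &&
    "simulations" in result
  );
}
```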
  • Defines the tool schema for MCP, specifying method name, description from prompts, Zod parameters schema, and required actions on simulations.
    method: "list_simulations",
    name: "List simulations",
    description: prompts.listSimulationsPrompt,
    parameters: params.listSimulationsParameters,
    actions: {
      simulations: {
        read: true,
        list: true,
      },
    },
  • src/api.ts:55-55 (registration)
    Registers the listSimulations handler in the toolMap dictionary, mapping the LIST_SIMULATIONS constant to the function for execution.
    [TOOL_METHODS.LIST_SIMULATIONS]: funcs.listSimulations,
  • src/constants.ts:47-47 (registration)
    Defines the constant TOOL_METHODS.LIST_SIMULATIONS = "list_simulations" used in tool definitions and registrations.
    LIST_SIMULATIONS: "list_simulations",
  • Helper function to extract pagination data from Paddle collections, used in listSimulations and other list handlers.
    const paginationData = (collection: PaginatedCollection) => ({
      hasMore: collection.hasMore,
      estimatedTotal: collection.estimatedTotal,
    });
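Note that the handler fetches only the first page via next(); a caller that wants every page could keep calling next() while hasMore is true. A minimal sketch, assuming a simplified stand-in for the SDK's collection interface (the real Paddle type may differ):

```typescript
// Simplified stand-in for a paginated SDK collection; not the real interface.
interface PagedCollection<T> {
  hasMore: boolean;
  next(): Promise<T[]>;
}

async function drain<T>(collection: PagedCollection<T>): Promise<T[]> {
  const items: T[] = [];
  do {
    items.push(...(await collection.next())); // each call fetches one page
  } while (collection.hasMore);
  return items;
}
```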
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, but the description adds valuable behavioral context: it explains pagination mechanics ('after' parameter usage), recommends default perPage value (200), and clarifies that results are configurations rather than runs. This goes beyond what annotations alone provide without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. Each sentence adds value: distinguishing simulations from runs, providing usage recommendations, and explaining pagination. While efficient, the recommendation about perPage could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the read-only nature (annotations), comprehensive parameter documentation (schema), and lack of output schema, the description provides good contextual completeness. It covers key behavioral aspects like pagination and filtering scope, though it doesn't describe the return format or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 6 parameters thoroughly. The description adds minimal parameter-specific semantics beyond the schema, mainly reinforcing filtering capabilities and pagination behavior. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'list simulations in Paddle' and distinguishes simulations from simulation runs, providing specific context about what type of resource is being listed. However, it doesn't explicitly differentiate from sibling list tools like 'list_simulation_runs' beyond the general distinction mentioned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning filtering parameters and pagination, but doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_simulation' or 'list_simulation_runs'. It offers some operational context but lacks clear when/when-not directives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

