PTC-MCP enables programmatic tool calling for Claude Code via MCP, allowing batch execution of multiple tool calls in a single Python script without intermediate results consuming conversation tokens.
- **Discover available tools:** Use `list_callable_tools` to get a JSON list of all namespaced tool names from connected MCP servers.
- **Inspect tool schemas:** Use `inspect_tool` to retrieve detailed schema information including input requirements, description, and output structure.
- **Execute complex workflows:** Run Python scripts with `execute_program`, where tools are available as `mcp__<server>__<tool>()` async functions, enabling loops, conditional logic, data filtering, and aggregation.
- **Reduce token usage:** Keep intermediate tool results within the Python runtime instead of adding them to the conversation context; only `print()` statements are returned as output.
- **Improve performance:** Execute 3+ tool calls in a single script to reduce latency from multiple model round-trips.
- **Bridge multiple MCP servers:** Connect to downstream servers via stdio or SSE transports.
- **Control access and limits:** Apply tool allow/block lists to control which tools are available, and configure timeout and output size limits for execution safety.
Programmatic Tool Call MCP
Programmatic Tool Calling for Claude Code via MCP.
Claude Code on subscription plans lacks the Anthropic API's programmatic tool calling (PTC) feature, where Claude can write Python scripts that call multiple tools in a single execution. Without it, every tool invocation is a full model round-trip — intermediate results enter the context window, consuming tokens and adding latency.
PTC-MCP fixes this. It's an MCP server that exposes three tools:
- `list_callable_tools` — Returns a JSON list of all available tool names. Use this to discover what's callable before writing a script.
- `inspect_tool` — Returns the schema and description of a specific tool, including its `outputSchema` if the upstream server defines one.
- `execute_program` — Runs a Python script with MCP tools injected as async functions. Only stdout comes back. Intermediate tool results stay in the Python runtime and never enter the conversation.
How it works
At startup, PTC-MCP connects to your configured MCP servers as a client, discovers their tools, and makes them callable as mcp__<server>__<tool>() async functions inside scripts. Claude can call list_callable_tools to discover available tools, inspect_tool to understand a tool's schema, and then execute_program to run a script using those tools. Tool calls proxy to the real MCP servers, results stay local, and only print() output goes back.
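The naming scheme can be sketched as a simple mapping. This is only an illustration of the `mcp__<server>__<tool>` convention described above; the helper function and the server/tool names are hypothetical:

```python
def namespaced_tool(server: str, tool: str) -> str:
    """Build the callable name PTC-MCP injects for a downstream tool."""
    return f"mcp__{server}__{tool}"

# A tool "get_quote" on a server configured as "yahoo_finance" becomes
# callable inside execute_program scripts under this name:
print(namespaced_tool("yahoo_finance", "get_quote"))
# → mcp__yahoo_finance__get_quote
```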
Tools
list_callable_tools
Takes no arguments. Returns a JSON array of sorted namespaced tool names:
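For example (the server and tool names below are illustrative, not from a real configuration):

```json
[
  "mcp__weather__get_forecast",
  "mcp__yahoo_finance__get_quote",
  "mcp__yahoo_finance__get_ticker_news"
]
```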
inspect_tool
Takes a tool_name string. Returns the tool's schema, description, and outputSchema (if available):
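A response might look like the following sketch; the tool and its schema are illustrative, not taken from a real server:

```json
{
  "name": "mcp__yahoo_finance__get_quote",
  "description": "Fetch the latest quote for a ticker symbol.",
  "inputSchema": {
    "type": "object",
    "properties": { "symbol": { "type": "string" } },
    "required": ["symbol"]
  },
  "outputSchema": null
}
```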
Note: `outputSchema` is populated when the downstream MCP server defines one on its tools per the MCP tool output schema specification. Downstream servers that declare output schemas improve discoverability — Claude can understand return types before writing a script. Without one, `inspect_tool` returns `null` for `outputSchema` and suggests inspecting return values at runtime instead.
execute_program
Takes a code string. Runs the Python script with all registered tools available as async functions. Returns stdout prefixed with a status line.
Example
Claude decides that comparing three tickers benefits from batched execution:
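A minimal sketch of the kind of program Claude might submit to `execute_program` follows. The tool name is illustrative, and the stub at the top stands in for the function PTC-MCP would inject — inside the real runtime, the `mcp__<server>__<tool>()` functions are already available and no stub is needed:

```python
import asyncio
import json

# Stand-in for the injected async tool function; server and tool names
# are illustrative. Inside execute_program this definition is not needed.
async def mcp__yahoo_finance__get_quote(symbol: str) -> dict:
    return {"symbol": symbol, "revenue_growth": 0.10}

async def main() -> dict:
    growth = {}
    for symbol in ("AMZN", "MSFT", "GOOG"):
        info = await mcp__yahoo_finance__get_quote(symbol)  # one tool call per ticker
        growth[symbol] = info["revenue_growth"]
    return growth

summary = asyncio.run(main())
# Intermediate tool results stay here; only printed output is returned.
print(json.dumps(summary))
```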
Three tool calls happen inside the script. Claude sees only:
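The shape of the returned output is a status line followed by whatever the script printed; both the status-line format and the values below are illustrative:

```
OK
{"AMZN": 0.1, "MSFT": 0.1, "GOOG": 0.1}
```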
Setup
Requires Python 3.11+.
Configuration
Create a config.yaml (or set PTC_MCP_CONFIG to point elsewhere):
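A sketch of what such a file might contain. The key names below are assumptions inferred from the fields this section describes, not the authoritative schema, and the server entries are illustrative:

```yaml
# Hypothetical config.yaml — key names may differ from the real schema.
servers:
  yahoo_finance:
    transport: stdio
    command: uvx
    args: ["mcp-yahoo-finance"]
  docs:
    transport: sse
    url: "http://localhost:8080/sse"

tools:
  allow:
    - mcp__yahoo_finance__get_quote

execution:
  timeout_seconds: 60
  max_output_bytes: 65536
```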
- `servers` — MCP servers to bridge. Supports `stdio` and `sse` transports.
- `tools.allow` / `tools.block` — Whitelist or blacklist namespaced tool names (mutually exclusive). Omit both to allow everything.
- `execution` — Timeout and output size limits for `execute_program`.
The server starts fine with no config file or an empty `servers` list.
Running
The server communicates over stdio (JSON-RPC). Add it to your Claude Code MCP settings to use it.
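For example, a project-level `.mcp.json` entry might look like the following. The launch command and module name are assumptions — substitute however you actually start the server:

```json
{
  "mcpServers": {
    "ptc-mcp": {
      "command": "python",
      "args": ["-m", "ptc_mcp"],
      "env": {
        "PTC_MCP_CONFIG": "/path/to/config.yaml"
      }
    }
  }
}
```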
Testing
Tests include unit tests for config parsing, the execution engine, registry filtering/namespacing, and end-to-end integration tests that spin up a real mock MCP server.