fetchsandbox-mcp
Turn any OpenAPI spec into a working sandbox your AI agent can use, right from your IDE.
This is the Model Context Protocol (MCP) server for FetchSandbox. It exposes three tools that let any MCP-compatible agent ingest an OpenAPI spec, list its workflows, and run them — with realistic, schema-validated responses for every endpoint.
Why
Agents read raw OpenAPI specs and hallucinate. They guess field names, invent IDs that won't exist, and produce broken curl commands. FetchSandbox turns the spec into a stateful, AJV-validated sandbox so the agent can actually call the API and see real-shaped responses.
Plug it into your IDE once, and any time you ask your agent "let me try the Stripe API" or "show me the GitHub issue lifecycle," it can do that — for real, end-to-end.
Install — by agent
The MCP runs as a stdio process spawned by your IDE. There's nothing to install globally — npx runs the latest published version on demand. Pick your tool below, paste the snippet, restart.
Claude Desktop
File: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows)
{
"mcpServers": {
"fetchsandbox": {
"command": "npx",
"args": ["-y", "fetchsandbox-mcp"]
}
}
}
Quit and reopen Claude Desktop (Cmd+Q, then reopen — not just close the window).
Claude Code
User-level (all projects): ~/.claude/settings.json. Or project-level: .mcp.json in the repo root.
{
"mcpServers": {
"fetchsandbox": {
"command": "npx",
"args": ["-y", "fetchsandbox-mcp"]
}
}
}
Restart the Claude Code session.
Cursor
File: ~/.cursor/mcp.json (global) or .cursor/mcp.json (project)
{
"mcpServers": {
"fetchsandbox": {
"command": "npx",
"args": ["-y", "fetchsandbox-mcp"]
}
}
}
Restart Cursor.
Cline (VS Code extension)
Open the Cline panel → settings cog → MCP Servers → add a new server with:
Command:
npx
Args:
-y fetchsandbox-mcp
Reload the VS Code window.
Continue.dev
File: ~/.continue/config.yaml
mcpServers:
- name: fetchsandbox
command: npx
args:
- -y
    - fetchsandbox-mcp
Restart your IDE.
Codex CLI (OpenAI)
File: ~/.codex/config.toml
[mcp_servers.fetchsandbox]
command = "npx"
args = ["-y", "fetchsandbox-mcp"]
Restart Codex.
Zed
File: ~/.config/zed/settings.json
{
"context_servers": {
"fetchsandbox": {
"command": {
"path": "npx",
"args": ["-y", "fetchsandbox-mcp"]
}
}
}
}
GitHub Copilot
GitHub Copilot doesn't currently support the Model Context Protocol. Track github/copilot#feedback for updates. In the meantime, run any MCP-compatible chat (Claude Code, Cursor, Cline) alongside Copilot.
Anything else (Roo, Goose, etc.)
If your agent speaks MCP, it accepts a stdio command. Use:
Command:
npxArgs:
["-y", "fetchsandbox-mcp"]
Try it now
After restarting your agent, paste any of these prompts. Each hits a hand-curated workflow with realistic IDs and real state transitions.
Stripe — accept a payment
Use fetchsandbox to import the Stripe spec from
https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json and run the accept_payment workflow. Show me the trace.
The agent imports 587 endpoints, matches the bundled curated Stripe sandbox, and runs a 6-step workflow: create customer (cus_…) → create PaymentIntent (pi_…, $49.99 USD, requires_payment_method) → confirm (requires_capture) → capture (succeeded) → retrieve → verify webhooks (payment_intent.created, payment_intent.succeeded).
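Each step in that trace feeds the next: the customer id from step 1 is referenced by later requests via template variables. As an illustrative sketch of how that kind of {{step1.id}} substitution works (not the engine's actual code), assuming a simple step-name/field placeholder scheme:

```python
import re

def resolve_templates(request: dict, outputs: dict) -> dict:
    """Replace {{stepN.field}} placeholders with values from earlier steps."""
    def sub(value):
        if isinstance(value, str):
            return re.sub(
                r"\{\{(\w+)\.(\w+)\}\}",
                lambda m: str(outputs[m.group(1)][m.group(2)]),
                value,
            )
        if isinstance(value, dict):
            return {k: sub(v) for k, v in value.items()}
        return value
    return sub(request)

# Step 1 created a customer; step 2's request body references its id.
outputs = {"step1": {"id": "cus_NffrFeUfNV2Hib"}}
step2 = {"path": "/v1/payment_intents", "body": {"customer": "{{step1.id}}"}}
print(resolve_templates(step2, outputs))
# {'path': '/v1/payment_intents', 'body': {'customer': 'cus_NffrFeUfNV2Hib'}}
```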
Twilio — send an SMS
Use fetchsandbox to import the Twilio Messaging spec from
https://raw.githubusercontent.com/twilio/twilio-oai/main/spec/yaml/twilio_messaging_v1.yaml and run the send_sms workflow.
The agent imports the messaging API and runs a curated send-and-verify flow with realistic Twilio-formatted message SIDs (SM…).
GitHub — issue lifecycle
Use fetchsandbox to import the GitHub REST API from
https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json and run the issue_lifecycle workflow.
The agent walks the create → comment → close → reopen flow against a real-shaped GitHub sandbox.
Paddle — paste-content variant
If a vendor doesn't publish their spec at a stable URL (Paddle, Notion, Linear), paste the content directly:
Here's the Paddle Billing OpenAPI spec —
<paste JSON or YAML>. Use fetchsandbox to import it and run the subscriptions_canceled workflow.
Same engine path; same curated quality if the spec's info.title matches a bundled config.
Any other API
Use fetchsandbox to import
<your OpenAPI URL> — list the workflows and tell me which is most interesting.
For specs we don't have curated configs for, the engine auto-enumerates create + verify workflows for every detected resource. It is honest about what it shows: UUIDs instead of vendor-style IDs, and generic enum values instead of API-specific ones — but the request/response shapes and template substitution between steps still work.
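The auto-enumeration idea can be sketched from the spec's paths object: a collection path with a POST plus a matching /{id} path with a GET yields a create-and-verify pair. A minimal illustration of that pattern (the function and workflow naming here are hypothetical, not the engine's actual scheme):

```python
def enumerate_workflows(spec: dict) -> list[str]:
    """Derive create + verify workflow names from an OpenAPI paths object."""
    workflows = []
    for path, ops in spec.get("paths", {}).items():
        # Collection path with a POST = a candidate "create" step.
        if "post" in ops and "{" not in path:
            resource = path.strip("/").split("/")[-1].rstrip("s")
            # A matching GET-by-id path supplies the "verify" step.
            item_path = f"{path}/{{id}}"
            if "get" in spec["paths"].get(item_path, {}):
                workflows.append(f"create_and_verify_{resource}")
    return workflows

spec = {"paths": {
    "/customers": {"post": {}},
    "/customers/{id}": {"get": {}},
    "/charges": {"post": {}},  # no GET-by-id path, so no verify step
}}
print(enumerate_workflows(spec))  # ['create_and_verify_customer']
```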
Tools
import_spec
Ingest an OpenAPI 3.x spec and get a sandbox you can call. Pass either a public URL or pasted content.
url: "https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json"
content: "<paste OpenAPI JSON or YAML here>"
name: "Optional friendly name"
Returns spec_id, sandbox_id, base_url (a proxy that serves real-shaped responses), workflows_preview (first 10), matched_bundled (true if we matched a curated config), and a dashboard_url to view everything in the browser.
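Your IDE issues these tool calls for you, but it can help to see what one looks like on the wire. Over MCP's stdio transport, a call to import_spec is a standard JSON-RPC 2.0 tools/call request (the method name and name/arguments shape come from the MCP spec; the id and argument values below are just examples):

```python
import json

# Minimal JSON-RPC 2.0 payload an MCP client sends to invoke import_spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "import_spec",
        "arguments": {
            "url": "https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json",
            "name": "Stripe",
        },
    },
}
print(json.dumps(request, indent=2))
```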
list_workflows
List the named, runnable workflows the engine inferred or curated for an imported spec.
spec_id: "<id from import_spec>"
run_workflow
Execute one workflow and return the step-by-step request/response trace. Template variables ({{step1.id}}) are resolved automatically between steps.
sandbox_id: "<id from import_spec>"
workflow_name: "<id or name from list_workflows>"
Configuration
| Env var | Default | Purpose |
| --- | --- | --- |
|  |  | Override for stage testing or self-hosted backends. |
| FETCHSANDBOX_TELEMETRY | (on) | Set to 0 to disable telemetry. |
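Because the server is spawned by your IDE, the cleanest place to set these is the env block of the server entry itself — MCP stdio server configs generally support one. For example, in Claude Desktop's config (telemetry value shown as an example):

```json
{
  "mcpServers": {
    "fetchsandbox": {
      "command": "npx",
      "args": ["-y", "fetchsandbox-mcp"],
      "env": { "FETCHSANDBOX_TELEMETRY": "0" }
    }
  }
}
```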
What we record
When telemetry is on, each tool call records: an opaque per-machine session id (random UUID stored at ~/.fetchsandbox/session.json), the tool name, latency, success/failure, and the spec URL or "pasted". We do not record spec content, request bodies, or credentials. We use this to count daily-active sessions and learn which APIs people are bringing to the platform.
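The per-machine session id scheme described above — a random UUID persisted to a JSON file and reused on later runs — can be sketched as follows (illustrative only, not the package's actual code; the demo writes to a temp dir rather than ~/.fetchsandbox):

```python
import json
import tempfile
import uuid
from pathlib import Path

def session_id(state_dir: Path) -> str:
    """Return a stable per-machine id, creating it on first use."""
    state_file = state_dir / "session.json"
    if state_file.exists():
        return json.loads(state_file.read_text())["session_id"]
    sid = str(uuid.uuid4())  # opaque: carries no machine or user info
    state_dir.mkdir(parents=True, exist_ok=True)
    state_file.write_text(json.dumps({"session_id": sid}))
    return sid

d = Path(tempfile.mkdtemp())
first, second = session_id(d), session_id(d)
print(first == second)  # True — the id is stable across calls
```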
To opt out:
export FETCHSANDBOX_TELEMETRY=0
License
MIT — see LICENSE.