kitsune-mcp
Kitsune MCP is a gateway server that dynamically discovers, mounts, and unmounts any of 10,000+ MCP servers on demand, minimizing token overhead (~1,187 tokens at rest) while providing full access to external tools when needed.
- `search(query, registry?, compare?)` – Search across 7 registries (official, npm, PyPI, GitHub, Glama, MCPRegistry, Smithery) to discover MCP servers; use `compare=True` for a side-by-side token cost table.
- `shapeshift(server_id?, tools=[], server_args=[])` – Mount a server's tools at runtime (optionally loading only a subset for lean context), or call with no arguments to unmount and return to the ~965-token baseline.
- `call(tool_name, arguments?, server_id?)` – Invoke any tool on the currently mounted (or a specified) server.
- `auto(task, server_hint?, arguments?)` – One-shot workflow: automatically searches, mounts, and calls the right server/tool for a given task.
- `auth(server_id_or_var, value?)` – Check or set credentials (env vars or OAuth 2.1), save API keys, trigger browser OAuth flows, or revoke tokens.
- `status()` – View runtime state: provider auth, active server, open connections, session stats, and context bloat detection.
Key benefits: 70–95% token savings vs. always-on MCP configs, improved tool-selection reliability by keeping visible tool count low, process isolation for local packages (npm, PyPI, Docker), and advanced developer tools (schema inspection, benchmarking, custom tool registration) when KITSUNE_TOOLS=all.
Kitsune is a gateway MCP server that discovers, installs, and dynamically loads any of 130,000+ MCP servers at runtime (134,945 indexed on Glama alone as of May 2026). Instead of keeping every server's tools in context permanently, Kitsune mounts tools on demand via shapeshift() and releases them when done. Five tools at rest. Thousands available on request. No restarts.
Why not just shell out to a CLI?
Shelling out to aws, gcloud, kubectl, or gh from a Bash tool also costs ~0 tokens at rest. So why MCP at all?
Because long-tail CLI commands fail. LLMs have great recall on the top ~20 commands of a CLI they've seen in training (git status, gh pr list, aws s3 cp) and steeply degraded recall on everything else. For an API surface the size of aws (~9,000 subcommands), first-call success drops to 30–50% on long-tail operations — wrong flag names, singular-vs-plural verbs, case-sensitive enums, and silently-deprecated options. Each miss costs a retry turn, and the worst failures aren't errors but plausible-looking wrong calls that succeed.
MCP gives you structured tool schemas the model can read and validate against:
| Approach | Long-tail accuracy | Failure mode | Token cost at rest |
|---|---|---|---|
| CLI fallback | ~30–50% on rare subcommands | Hallucinated flags, silent wrong calls | ~0 |
| Always-on MCP | ~95% across the whole surface | Schema bloat in every turn | ~10–15K per server |
| Kitsune (mount on demand) | ~95% — only when you need it | None — schemas drop after use | ~1,321 tokens |
That's the structural argument for the hub model: CLI-cheap at rest, MCP-accurate when it matters. For one-off ops on a CLI the model knows cold, gh is fine. For unfamiliar APIs, internal tooling, or any operation where a wrong call has real cost (production AWS changes, billing operations, security flows), Kitsune gives you schema-validated execution without the always-on tax. See examples/scenarios/ for seven worked use cases.
Token savings vs always-on
The savings grow with every server you add — because Kitsune's resting cost stays flat at ~1,321 tokens (measured: 6 lean-profile tools) no matter how many servers live behind it:
Saving formula: 1 − (Kitsune base 965 + surgical mount) / always-on total
| Always-on servers | Always-on/turn | Kitsune per active call | Saved |
|---|---|---|---|
| GitHub (26 tools) | 4,229 | ~1,265 (965 + ~300) | 70% |
| GitHub + filesystem + git | 8,678 | ~1,265–1,655 | 81–85% |
| Notion + GitHub + filesystem + git + memory | 25,000 | ~1,265–2,915 | 88–95% |
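The saving formula can be checked against these rows with a few lines of Python (a back-of-envelope helper for the arithmetic above, not part of Kitsune's API):

```python
def kitsune_saving(always_on_total, surgical_mount, base=965):
    """Fraction of per-turn tokens saved: 1 - (base + surgical mount) / always-on."""
    return 1 - (base + surgical_mount) / always_on_total

# GitHub row: 4,229 tokens always-on vs. 965 base + ~300 surgical mount
print(f"{kitsune_saving(4229, 300):.0%}")    # → 70%

# Five-server row, most expensive mount (Notion, ~1,950)
print(f"{kitsune_saving(25000, 1950):.0%}")  # → 88%
```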
Savings grow because Kitsune's ~965-token baseline is shared across all registered servers — you only pay it once regardless of how many are behind it.
Fewer tools in context also means more reliable answers. Research consistently shows LLM tool-selection degrades as the visible tool count grows — Kitsune keeps the model focused on exactly what the current task needs.
Installation
```shell
pip install kitsune-mcp   # recommended
# or
uvx kitsune-mcp           # isolated env via uv, no venv setup
# or
npx kitsune-mcp           # npm (delegates to uvx internally)
```

Requirements: Python 3.12+ · node/npx for npm-based servers · uvx from uv for PyPI-based servers
Add to your MCP client config — once, globally:
```json
{
  "mcpServers": {
    "kitsune": { "command": "kitsune-mcp" }
  }
}
```

Compatible with Claude Desktop, Claude Code, Cursor, Cline, OpenClaw, Continue.dev, Zed, and any MCP-compatible client.
| Client | Config file |
|---|---|
| Claude Desktop (macOS) | |
| Claude Desktop (Windows) | |
| Claude Code | |
| Cursor / Windsurf | |
| Cline / Continue.dev | VS Code settings |
Quick start
```python
# Find a server
search("web scraping")

# Mount specific tools, use them, release
shapeshift("notion-hosted", tools=["notion-search"])
call("notion-search", arguments={"query": "roadmap"})
shapeshift()  # context returns to ~1,321 tokens

# One-shot via auto() — use server_hint for reliable routing
auto("current time in Tokyo", server_hint="mcp-server-time")

# Store a credential, then mount
auth("BRAVE_API_KEY", "sk-...")
shapeshift("brave", tools=["brave_web_search"])
call("brave_web_search", arguments={"query": "MCP protocol 2025"})
shapeshift()
```

How it works
Kitsune is a dynamic MCP proxy. shapeshift(server_id) connects to a target server via the appropriate transport (stdio subprocess, HTTP, WebSocket), fetches its tools/list, and registers each tool as a native FastMCP tool with the exact schema from the server. The AI client receives a notifications/tools/list_changed event and sees the new tools as first-class — no wrapper, no indirection.
shapeshift() with no args reverses all of it: deregisters the proxy closures, closes the connection, and notifies the client. Context returns to the ~965-token baseline.
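The mount/unmount bookkeeping can be modelled in miniature (a toy sketch, not Kitsune's actual implementation; FastMCP handles the real registration and `tools/list_changed` notifications):

```python
class GatewayModel:
    """Toy model of the shapeshift cycle: baseline tools are permanent,
    mounted tools are proxy closures that are all dropped on unmount."""

    BASELINE = {"search", "shapeshift", "call", "auth", "auto", "status"}

    def __init__(self):
        self.mounted = {}  # tool name -> forwarding closure

    def shapeshift(self, remote_tools=None):
        if remote_tools is None:
            self.mounted.clear()  # no args: deregister every proxy
        else:
            for name in remote_tools:
                # each closure would forward the call over the open transport
                self.mounted[name] = lambda args, n=name: ("forwarded", n, args)
        return self.visible_tools()  # client is then notified of the change

    def visible_tools(self):
        return self.BASELINE | set(self.mounted)

gw = GatewayModel()
gw.shapeshift(["brave_web_search"])
assert "brave_web_search" in gw.visible_tools()
gw.shapeshift()  # back to the baseline surface
assert gw.visible_tools() == GatewayModel.BASELINE
```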
Tool-schema RAG
| Document RAG | Kitsune |
|---|---|
| Index all documents | Registry: 130,000+ servers across 7 sources |
| Query → retrieve relevant chunks | `search(query)` finds candidate servers |
| Inject only relevant content | `shapeshift(server_id, tools=[...])` mounts only the needed tool schemas |
| Model reasons over those chunks | Agent calls those tools natively |
| Evict when done | `shapeshift()` unmounts and returns to baseline |
Transport selection
| Server source | Transport |
|---|---|
| npm package | stdio subprocess via `npx` |
| PyPI package | stdio subprocess via `uvx` |
| GitHub repo | stdio subprocess (installed locally) |
| Smithery hosted | HTTP + SSE (requires `SMITHERY_API_KEY`) |
| WebSocket | WebSocket |
| Docker | stdio via `docker run --rm -i` |
Tool reference
| Tool | Signature | Description |
|---|---|---|
| `status` | `status()` | Provider auth state, GATEWAY bloat detection, session performance stats |
| `search` | `search(query, registry?, compare?)` | Search for servers across 7 registries; `compare=True` adds a token-cost comparison |
| `shapeshift` | `shapeshift(server_id?, tools=[], server_args=[])` | Mount a server's tools (with ID) or unmount current form (no args) |
| `call` | `call(tool_name, arguments?, server_id?)` | Invoke a tool; optionally on a specified server |
| `auth` | `auth(server_id_or_var, value?)` | Check or set env vars; trigger OAuth 2.1 browser flow for hosted servers |
| `auto` | `auto(task, server_hint?, arguments?)` | One-shot: search → mount → call → return result |
Context overhead at rest: ~1,321 tokens for all 6 lean-profile tools (measured via examples/benchmark.py).
`auto()` note: `auto(task, server_hint="server-id")` gives reliable results. Without `server_hint`, routing is best-effort via semantic search and can misfire on ambiguous queries — use `search()` first when unsure.
Server sources
Kitsune searches 7 registries in parallel. No single registry is required.
| Registry | Auth required |
|---|---|
| Official MCP registry | None |
| Glama | None |
| MCPRegistry | None |
| npm | None |
| PyPI | None |
| GitHub repos | None |
| Smithery | Free API key |
search() fans out across all no-auth registries by default. Add a SMITHERY_API_KEY to include Smithery's hosted catalog (HTTP servers, no local install required).
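The fan-out is a standard parallel-query-and-merge pattern; a minimal sketch with `concurrent.futures` (the registry names and search callables here are stand-ins, not Kitsune's internals):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_search(query, registries):
    """Query every registry in parallel and collect results per registry."""
    with ThreadPoolExecutor(max_workers=max(len(registries), 1)) as pool:
        futures = {name: pool.submit(search_fn, query)
                   for name, search_fn in registries.items()}
        return {name: fut.result() for name, fut in futures.items()}

# Stand-in search functions; the real ones hit the registry APIs
results = fan_out_search("web scraping", {
    "npm":  lambda q: [f"{q}-mcp (npm)"],
    "pypi": lambda q: [f"mcp-{q}"],
})
assert set(results) == {"npm", "pypi"}
```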
GATEWAY: context bloat detection
status() scans your active MCP client configs and reports what other servers are running on every turn:
```
GATEWAY
⚠ 1 other server(s) active in claude-desktop (~8 extra tools in context)
  Run setup() to harvest their credentials and reduce bloat
⚠ 1 other server(s) active in claude-code (~8 extra tools in context)
```

To consolidate:

```python
setup()                   # preview — shows what can be harvested
setup(action="harvest")   # extract API keys → ~/.kitsune/.env (non-destructive)
setup(action="absorb")    # register those servers for shapeshift()
setup(project=True)       # write .claude/mcp.json with only Kitsune (this project)
```

Kitsune never modifies existing configs without explicit confirmation.
Performance
Token overhead: surgical mount vs full mount
Full-mount figures measured live against v0.20.1 via shapeshift() probes. Surgical estimates (~) are proportional approximations based on tool count, not individually measured. To measure Kitsune's own profile size: python examples/benchmark.py.
Saved = 1 − (500 base + surgical) / always-on.
| Server | Tools | Always-on | Surgical example | 500 + surgical | Saved |
|---|---|---|---|---|---|
| mcp-server-time | 2 | 261 | (all tools) | ~761 | — ¹ |
| mcp-server-git | 12 | 1,242 | status / diff / log | ~810 | 35% |
| server-memory | 9 | 2,615 | read_graph / search_nodes | ~1,080 | 59% |
| server-filesystem | 14 | 3,207 | read / write / edit | ~1,190 | 63% |
| brave-search | 8 | 3,612 | brave_web_search | ~950 | 74% |
| github | 26 | 4,229 | search_repositories | ~800 | 81% |
| notion-hosted | 14 | 13,707 | search / fetch | ~2,450 | 82% |
¹ mcp-server-time's full schema (261 tokens) is smaller than Kitsune's base. Kitsune pays off for small servers only when multiple servers share the baseline.
Multi-server compounding
Kitsune's resting cost (~1,321 tokens) is constant regardless of how many servers are registered. Always-on cost grows linearly with each server added.
All figures use servers with measured full-mount costs. Kitsune cost = 965 base + surgical mount for whichever server is active. The range reflects the cheapest (git ~310) to most expensive (Notion ~1,950) surgical call.
| Servers always-on | Always-on/turn | Kitsune per active call | Saved |
|---|---|---|---|
| GitHub only | 4,229 | ~1,265 | 70% |
| GitHub + filesystem + git | 8,678 | ~1,265–1,655 | 81–85% |
| Notion + GitHub + filesystem + git + memory | 25,000 | ~1,265–2,915 | 88–95% |
Tool-selection reliability
LLM tool-selection degrades as the visible tool count grows — a finding consistent across multiple tool-use benchmarks (Gorilla, ToolBench). The failure mode is typically adjacent-name confusion: a model that sees read_file, read_text_file, and read_media_file simultaneously is more likely to call the wrong one than a model that sees only the one it needs.
Kitsune holds 6 tools at rest; 7–9 during active use. A Kitsune-specific benchmark measuring selection accuracy across tool-count conditions does not yet exist — contributions welcome.
Connection latency
Kitsune maintains a persistent process pool — re-attaching to a running server within a session takes 0 ms.
| Transport | Cold start | Warm (pooled) |
|---|---|---|
| HTTP / Smithery hosted | 0–1.4 s | 0.0 s |
| Local stdio via `npx` | 1.7–6.3 s | 0.0 s |
| Local stdio via `uvx` | 1.0–5.2 s | 0.0 s |
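The warm path boils down to a keyed pool with capacity eviction; a toy model (the 10-server cap mirrors the security section below, everything else is illustrative):

```python
from collections import OrderedDict

class ProcessPool:
    """Toy pool: first attach pays the cold start, repeats reuse the handle."""

    def __init__(self, max_servers=10):
        self.max_servers = max_servers
        self.handles = OrderedDict()  # server_id -> handle, insertion-ordered

    def attach(self, server_id, spawn=lambda: object()):
        if server_id in self.handles:
            self.handles.move_to_end(server_id)
            return self.handles[server_id], "warm"  # ~0 ms, no subprocess spawn
        if len(self.handles) >= self.max_servers:
            self.handles.popitem(last=False)        # evict least-recently used
        self.handles[server_id] = spawn()           # cold start
        return self.handles[server_id], "cold"

pool = ProcessPool(max_servers=2)
assert pool.attach("mcp-server-git")[1] == "cold"
assert pool.attach("mcp-server-git")[1] == "warm"
pool.attach("brave"); pool.attach("notion-hosted")  # evicts mcp-server-git
assert pool.attach("mcp-server-git")[1] == "cold"
```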
Configuration
Env vars and .env files
Kitsune re-reads credentials on every shapeshift() and call(). Add or update a key mid-session — no restart needed.
Search order: CWD/.env → ~/.env → ~/.kitsune/.env (last wins).
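The last-wins layering amounts to merging the files in search order; a minimal sketch (parsing is simplified; real `.env` files also support quoting and `export` prefixes):

```python
from pathlib import Path

def load_env_layers(paths):
    """Merge KEY=VALUE files in order; later files override earlier ones."""
    merged = {}
    for path in paths:
        path = Path(path)
        if not path.exists():
            continue  # missing layers are simply skipped
        for line in path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                merged[key.strip()] = value.strip()
    return merged

# Search order: CWD/.env -> ~/.env -> ~/.kitsune/.env (last wins)
creds = load_env_layers([".env", Path.home() / ".env",
                         Path.home() / ".kitsune" / ".env"])
```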
```
# Write a key and activate immediately
auth("BRAVE_API_KEY", "sk-...")  # writes to ~/.kitsune/.env

# Or manage .env directly
echo "BRAVE_API_KEY=sk-..." >> ~/.kitsune/.env
```

Custom tool surface
Expose only specific tools via KITSUNE_TOOLS:
```json
{
  "mcpServers": {
    "kitsune": {
      "command": "kitsune-mcp",
      "env": { "KITSUNE_TOOLS": "shapeshift,call,auth" }
    }
  }
}
```

Smithery
```json
{ "env": { "SMITHERY_API_KEY": "your-key" } }
```

Get a free key at smithery.ai/account/api-keys. Without it, Kitsune is fully functional via npm, PyPI, the official registry, and GitHub.
Agent profiles
Research agent — web search + fetch + memory
```python
shapeshift("brave", tools=["brave_web_search"])   # ~450 tokens
shapeshift("mcp-server-fetch")                    # ~289 tokens
shapeshift("@modelcontextprotocol/server-memory",
           tools=["read_graph", "search_nodes"])  # ~580 tokens
# Peak: ~1,300 tokens vs 6,516 always-on → 80% reduction
```

Code agent — filesystem + git
```python
shapeshift("@modelcontextprotocol/server-filesystem",
           tools=["read_file", "write_file", "edit_file"],
           server_args=["/path/to/project"])             # ~690 tokens
shapeshift("mcp-server-git",
           tools=["git_status", "git_diff", "git_log"])  # ~310 tokens
# Peak: ~1,000 tokens vs 4,449 always-on → 78% reduction
```

Notes / PM agent — Notion + memory
```python
shapeshift("notion-hosted",
           tools=["notion-search", "notion-append-block-children"])  # ~1,950 tokens
shapeshift("@modelcontextprotocol/server-memory",
           tools=["add_memory", "search_nodes"])                     # ~580 tokens
# Peak: ~2,500 tokens vs 16,322 always-on → 85% reduction
```

Security
Trust tiers
| Tier | Sources | Label |
|---|---|---|
| High | | |
| Medium | | |
| Community | | |
Community servers require confirm=True on shapeshift() — an explicit acknowledgement before running arbitrary code. Set KITSUNE_TRUST=community (via auth("KITSUNE_TRUST", "community") or .env) to skip the gate globally for servers you already trust.
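The gate reduces to a small predicate; a hypothetical sketch of the logic described above (the tier names come from the table, the rest is assumed):

```python
import os

def mount_allowed(tier, confirm=False):
    """Community-tier servers need confirm=True unless the gate is
    globally disabled via KITSUNE_TRUST=community (assumed semantics)."""
    if tier in ("high", "medium"):
        return True  # trusted tiers mount without an extra acknowledgement
    if os.environ.get("KITSUNE_TRUST") == "community":
        return True  # user opted out of the gate globally
    return confirm

assert mount_allowed("high")
assert not mount_allowed("community")
assert mount_allowed("community", confirm=True)
```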
Credential handling
- Credentials stored at `~/.kitsune/.env` and `~/.kitsune/oauth/` with mode `0600`
- OAuth 2.1 with PKCE S256 and Dynamic Client Registration (RFC 7591) for hosted servers
- `shapeshift()` warns on missing credentials before any tool call
- `auth("server-id", "logout")` clears cached OAuth tokens
Process isolation
- stdio servers run as isolated OS subprocesses — no shared memory with Kitsune
- Docker servers run with `--rm -i --memory 512m`
- `fetch()` blocks private IPs, loopback, and non-HTTPS URLs
- Process pool capped at 10 concurrent servers; idle processes evicted after 1 hour
- Install commands validated against shell metacharacter and path-traversal patterns before execution
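A validator in the spirit of that last check can be sketched with a deny-list regex (illustrative; the patterns Kitsune actually checks are not documented here):

```python
import re

# Shell metacharacters plus relative path traversal
UNSAFE = re.compile(r"[;&|`$<>]|\.\.[/\\]")

def is_safe_install_command(cmd: str) -> bool:
    """Reject commands containing injection or traversal patterns."""
    return not UNSAFE.search(cmd)

assert is_safe_install_command("npx -y @modelcontextprotocol/server-memory")
assert not is_safe_install_command("npx pkg; rm -rf ~")  # command chaining
assert not is_safe_install_command("uvx ../../evil")     # path traversal
```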
For MCP developers
The full evaluation suite is available by setting KITSUNE_TOOLS=all:
```json
{ "command": "kitsune-mcp", "env": { "KITSUNE_TOOLS": "all" } }
```

Additional tools:
| Tool | What it does |
|---|---|
| | Schema review + live credential check (✓/✗ per key) + measured token cost |
| | Quality score 0–100 across connectivity, schema correctness, and tool behaviour |
| | Latency benchmark — p50, p95, min, max |
| | Side-by-side: token cost, tool count, trust tier, credential status |
| | Register a custom HTTP-backed tool |
Test your server inside real Claude or Cursor sessions — not in an isolated inspector UI.
Why Kitsune?
In Japanese folklore, the Kitsune (狐) is a fox spirit of extraordinary intelligence and magical power. What makes it remarkable is not what it is, but what it can become. With age and wisdom, a Kitsune grows new tails — each one a new ability mastered, a new form borrowed from the world around it. It can shapeshift into anything: a scholar, a warrior, a force of nature. And when the purpose is fulfilled, it casts off that form as easily as it took it on, returning to its true self — ready to become something else entirely.
One fox. Infinite forms. Every power available. Nothing carried that isn't needed.
shapeshift("brave-search") — the fox takes on a new form. Its tools appear as if they were always there.
shapeshift() — it returns to its true shape. Context drops back to baseline. Ready for the next form.
Each server mounted is a new tail. Each capability borrowed cleanly and released when done. One entry in your config. Every server in the MCP ecosystem, on demand — summoned, used, and let go.
I am not Japanese, and I use this name with the highest respect for the mythology and culture it comes from. The parallel felt too precise to ignore — a spirit that shapeshifts between forms, gains new powers, and releases them at will. That is exactly what this tool does.
Contributing
```shell
make dev   # install with dev dependencies
make test  # pytest
make lint  # ruff
```

Issues and PRs: github.com/kaiser-data/kitsune-mcp
See CHANGELOG.md for version history.
MIT License · Python 3.12+ · Built on FastMCP