JIT Tool Synthesis
This server enables on-demand AI-powered tool generation with human approval gates and safe sandboxed execution — allowing you to create, manage, and run dynamically synthesized TypeScript tools using an LLM.
- **Synthesize tools** (`synthesize_tool`) — Describe a capability in natural language (with optional example input/output) and the LLM generates a working TypeScript tool, placed in a pending approval queue
- **Manage approval workflow** — Use `list_pending` to review queued tools, `approve_tool` to activate them, or `reject_tool` to discard them
- **Execute tools safely** (`execute_tool`) — Run approved tools in an isolated VM sandbox with blocked dangerous patterns
- **Browse and manage tools** — Use `list_tools` to see available tools, `get_tool` for full details (code, metadata), and `remove_tool` to delete tools permanently
- **Configure LLM at runtime** — Use `get_config` and `set_config` to view or switch providers (OpenAI, OpenRouter, Ollama, Groq, or any OpenAI-compatible API), models, and base URLs without restarting the server
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@JIT Tool Synthesis Create a tool that fetches the latest top stories from Hacker News"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
JIT Tool Synthesis v3
LLM-powered on-demand tool generation with human-in-the-loop approval and safe execution.
Overview
This system generates TypeScript tools dynamically using an LLM, requires human approval before execution, and runs them in a sandboxed environment.
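As an illustration, a synthesized tool might look something like the sketch below. The `SynthesizedTool` interface, the tool name, and its fields are hypothetical; the server's actual generated-tool contract may differ.

```typescript
// Hypothetical shape of a synthesized tool (assumed here, not the
// server's documented contract): a name, a description, and a run()
// entry point that takes structured input.
interface SynthesizedTool {
  name: string;
  description: string;
  run(input: Record<string, unknown>): Promise<unknown>;
}

// What the LLM might emit for "fetch top Hacker News stories".
const hnTopStories: SynthesizedTool = {
  name: "hn_top_stories",
  description: "Fetch the latest top story IDs from Hacker News",
  async run(input) {
    const limit = typeof input.limit === "number" ? input.limit : 5;
    const res = await fetch(
      "https://hacker-news.firebaseio.com/v0/topstories.json",
    );
    const ids: number[] = await res.json();
    return ids.slice(0, limit); // return only the requested number of IDs
  },
};
```

Code like this is what lands in the pending queue for human review before it can ever run.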
Architecture
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ Synthesizer │────▶│ Approval │────▶│ Sandbox │
│ (LLM) │ │ (Human Gate) │ │ (Execution) │
└─────────────┘ └──────────────┘ └─────────────┘
│ │ │
▼ ▼ ▼
Generates TS Waits for Runs in
  tool code       human approval     isolated env

Components
| File | Purpose |
| --- | --- |
|  | Generates tool code using any OpenAI-compatible LLM |
|  | Human-in-the-loop gate — requires approval before execution |
|  | Safe execution environment for generated code |
|  | Tool persistence and storage |
|  | MCP server integration |
|  | Runtime configuration management |
Provider-Agnostic
This tool works with any OpenAI-compatible LLM API:
- **OpenRouter** — 100+ models (Claude, GPT, Llama, etc.)
- **OpenAI** — GPT-5.4, Codex 5.3
- **Ollama** — Local models (Llama, Qwen, etc.)
- **LM Studio** — Local models with GUI
- **Groq** — Fast inference
- Any other OpenAI-compatible API
Setup
```bash
# Install dependencies
npm install

# Copy environment template
cp .env.example .env
```

Configure Your LLM Provider

Edit `.env` with your provider details:
```bash
# Option 1: OpenRouter (default - 100+ models)
LLM_API_KEY=your-openrouter-key
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_MODEL=anthropic/claude-4-5-sonnet

# Option 2: OpenAI direct
LLM_API_KEY=sk-...
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL=gpt-4o

# Option 3: Ollama (local)
LLM_BASE_URL=http://localhost:11434/v1
LLM_MODEL=llama3.1

# Option 4: Groq
LLM_API_KEY=gsk_...
LLM_BASE_URL=https://api.groq.com/openai/v1
LLM_MODEL=llama-3.1-70b-versatile
```

Usage
Start the MCP Server
```bash
npm run build
node dist/server.js
```

Runtime Configuration
You can change the LLM provider without restarting:
```
# View current config
get_config

# Change model at runtime
set_config model=openai/gpt-4o
```

MCP Tools
| Tool | Description |
| --- | --- |
| `synthesize_tool` | Generate a new tool from natural language |
| `approve_tool` | Activate a pending tool |
| `reject_tool` | Discard a pending tool |
| `execute_tool` | Run an approved tool |
| `list_tools` | List all approved tools |
| `get_tool` | View tool details |
| `remove_tool` | Delete a tool |
| `list_pending` | List tools waiting for approval |
| `get_config` | View LLM configuration |
| `set_config` | Change LLM provider/model at runtime |
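For context, an MCP client invokes these tools with a JSON-RPC `tools/call` request. The sketch below shows that request shape for `synthesize_tool`; the argument name `description` is an assumption here, since the server's exact input schema is not reproduced in this document.

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a
// tool (the "tools/call" method is defined by the MCP spec; the
// "description" argument name is an assumption for this server).
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "synthesize_tool", // which MCP tool to invoke
    arguments: {
      description: "Fetch the current UTC time as an ISO string",
    },
  },
};

// A client would serialize this and send it over the MCP transport.
const wire = JSON.stringify(request);
```

The same envelope works for every tool in the table above; only `params.name` and `params.arguments` change.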
Workflow
1. **Request** — User asks for a tool (e.g., "create a weather fetcher")
2. **Synthesize** — LLM generates tool code
3. **Approve** — Human reviews and approves the code
4. **Execute** — Tool runs in sandboxed environment
5. **Store** — Approved tools persist in registry
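The workflow above can be sketched as a small state machine: a tool starts out `pending` and becomes executable only once a human approves it. The class and method names below are illustrative, not the server's actual API.

```typescript
// Minimal sketch of the tool lifecycle: pending -> approved/rejected,
// with execution gated on the approved state. Illustrative only.
type ToolState = "pending" | "approved" | "rejected";

class ToolRegistry {
  private states = new Map<string, ToolState>();

  synthesize(name: string): void {
    this.states.set(name, "pending"); // new tools wait for human review
  }
  approve(name: string): void {
    if (this.states.get(name) === "pending") this.states.set(name, "approved");
  }
  reject(name: string): void {
    if (this.states.get(name) === "pending") this.states.set(name, "rejected");
  }
  canExecute(name: string): boolean {
    return this.states.get(name) === "approved"; // the human gate
  }
}

const registry = new ToolRegistry();
registry.synthesize("weather_fetcher"); // step 2: synthesized, now pending
registry.approve("weather_fetcher");    // step 3: human approval
```

The key property is that no path leads from `pending` to execution without passing through `approve`.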
Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| `LLM_API_KEY` | API key for your provider | (required for cloud) |
| `LLM_BASE_URL` | API endpoint | |
| `LLM_MODEL` | Model to use | `anthropic/claude-3-5-sonnet-20241022` |

Also supported (legacy): `OPENROUTER_API_KEY`, `OPENAI_API_KEY`, `OPENAI_BASE_URL`, `SYNTHESIZER_MODEL`
Security
- Generated code runs in an isolated VM sandbox
- Blocked patterns prevent dangerous code (`process`, `require`, `eval`, etc.)
- API keys are not stored in the config file
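A minimal sketch of this sandboxing approach, assuming a regex blocklist plus Node's built-in `vm` module (the server's actual blocklist and sandbox setup may differ):

```typescript
import * as vm from "node:vm";

// Reject code that references dangerous globals, then run it in an
// isolated VM context with a timeout. The blocklist here is an
// assumed example, not the server's exact pattern set.
const BLOCKED = [/\bprocess\b/, /\brequire\s*\(/, /\beval\s*\(/, /\bchild_process\b/];

function runSandboxed(code: string): unknown {
  for (const pattern of BLOCKED) {
    if (pattern.test(code)) throw new Error(`Blocked pattern: ${pattern}`);
  }
  // An empty context object means the code sees no Node globals:
  // no process, no require, no module.
  return vm.runInNewContext(code, {}, { timeout: 1000 });
}

const result = runSandboxed("1 + 2"); // value of the last expression
```

Note that `vm` alone is not a hard security boundary in Node; the pattern blocklist and timeout reduce risk, which is why human approval remains the primary gate.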
Status
Production Ready — Phase 1 complete.