Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Dynamic MCP Server review this CI/CD pipeline configuration for best practices".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Dynamic MCP Server
A versatile server for the Model Context Protocol (MCP) that dynamically configures tools from a JSON file.
How it Works
The dynamic-mcp-server is a single executable that can power multiple, distinct MCP servers. The behavior of each server is defined by a JSON configuration file that is loaded at startup. This means you can create many different MCP servers with different tools and prompts, all from the same executable.
The following diagram illustrates this architecture:
⚠️ WARNING: Dangerous Run Modes ⚠️
By default, this server is configured to use certain "dangerous" flags when interacting with claude, codex, and gemini CLIs (e.g., --dangerously-skip-permissions, --dangerously-bypass-approvals-and-sandbox, -y). These flags are intended for development and testing purposes and should be used with extreme caution in production environments, as they can bypass important security and safety mechanisms. Review src/main.js to understand these flags and modify them if your use case requires stricter security.
Use Cases
The dynamic-mcp-server can be used to create a variety of powerful tools that integrate with AI models. Here are a few examples:
Code Review Agent: Create a tool that reviews your code for style, errors, and best practices. You can configure it to use a specific model and prompt to match your team's coding standards.
Documentation Assistant: Build a tool that can answer questions about your codebase, generate documentation, or provide examples of how to use a specific function.
Custom Workflows: Implement complex workflows that involve multiple AI models. For example, you could create a workflow that first uses a code generation model to write a function, and then uses a code review model to check the generated code.
CLI Front-end: The dynamic-mcp-server allows you to create a CLI front-end for models like Gemini, Claude, and Codex. This is useful for users who prefer to interact with these models from the command line, but still want to leverage the power of MCP.
Model-Specific Prompts: Instead of relying on a generic prompt, the dynamic-mcp-server allows you to create model-specific prompts that are tailored to the strengths of each model. This can lead to better results and a more efficient workflow.
CLI Versions
This project is based on, and tested with, the following CLI versions:
Claude Code: 1.0.128
Codex CLI: 0.73.0
Gemini CLI: 0.21.0
Creating a Dynamic MCP Server
To create a new dynamic-mcp-server, you need to define a JSON configuration file. This file specifies the model to use, the tools to expose, and the prompts for each tool.
Configuration File Structure
The server is configured using a JSON file. This file can be located anywhere on your file system.
Config Options
| Field | Required | Description |
| --- | --- | --- |
| `name` | No | Server name (defaults to the config filename) |
| `cli` | Yes | CLI to use: `claude`, `codex`, or `gemini` |
| `model` | No | Specific model ID to pass to the CLI |
| `logging` | No | Optional logging configuration (see Logging section) |
| `tools` | Yes | Array of tool definitions |
Tool Definition
| Field | Required | Description |
| --- | --- | --- |
| `name` | Yes | Tool name (no spaces or dots) |
| `description` | Yes | What the tool does |
| `prompt` | No | Prompt template with `{{variable}}` placeholders |
| `promptFile` | No | Path to a file containing the prompt template (takes precedence over `prompt`) |
| `async` | No | Run this tool asynchronously. Overrides the server-level `--async` default |
| `logging` | No | Optional per-tool logging overrides (same fields as server logging; see Logging section) |
| `inputs` | No | Array of input parameters |
| `command` / `args` | No | Optional; currently not executed by the server. The model prompt drives the CLI call. Extend src/main.js if you want per-tool shell commands. |

Per-tool logging overrides are applied on top of server-level logging. CLI flags still take precedence over everything.
How Tool Arguments Become the Final Prompt
When a tool is invoked, the server builds a single prompt string that is passed to the model CLI:
If `prompt` or `promptFile` is provided, the template is used and `{{variable}}` placeholders are replaced with the incoming tool arguments.
If no prompt template is provided, the server falls back to the first `string` input value (if any), otherwise `JSON.stringify(toolParams)` for all inputs.
If the CLI `--prompt` flag is used, that prefix is prepended to the task with a newline separator.
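For illustration, consider a hypothetical tool whose template uses two placeholders (field names follow the tables above; the tool itself is not part of any shipped example):

```json
{
  "name": "review-snippet",
  "description": "Review a code snippet",
  "prompt": "Review the following {{language}} code for bugs:\n\n{{code}}",
  "inputs": [
    { "name": "language", "type": "string", "description": "Programming language", "required": true },
    { "name": "code", "type": "string", "description": "Code to review", "required": true }
  ]
}
```

Calling it with `language = "python"` and `code = "print(1)"` would produce the final prompt `Review the following python code for bugs:` followed by a blank line and `print(1)`; a `--prompt` prefix, if configured, would be prepended before that with a newline.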
Input Definition
| Field | Required | Description |
| --- | --- | --- |
| `name` | Yes | Parameter name |
| `type` | Yes | Parameter type (e.g., `string`) |
| `description` | Yes | Parameter description |
| `required` | No | Whether the parameter is required |
See the examples/ folder for sample configurations.
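For instance, a minimal configuration might look like the following sketch (field names as described above; the tool definition is illustrative):

```json
{
  "name": "code-review",
  "cli": "claude",
  "tools": [
    {
      "name": "review-code",
      "description": "Review code for style, errors, and best practices",
      "promptFile": "examples/code-review-prompt.txt",
      "inputs": [
        {
          "name": "code",
          "type": "string",
          "description": "The code to review",
          "required": true
        }
      ]
    }
  ]
}
```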
Logging
Logging is enabled by default at info level and writes to stderr. Configuration precedence is: CLI flags > environment variables > config file > defaults.
When format is json, each log entry includes serverName so you can distinguish logs from multiple MCP server instances.
Example config:
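A minimal sketch showing just the logging block; the field names and the `debug` level are assumptions, chosen to mirror the environment variables listed below:

```json
{
  "logging": {
    "enabled": true,
    "level": "debug",
    "format": "json"
  }
}
```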
Logging fields:
Each field corresponds to one of the `DYNAMIC_MCP_LOG_*` environment variables listed below.

| Field | Description |
| --- | --- |
| `enabled` | Enable/disable logging (default: `true`) |
| `level` | Log level (default: `info`) |
| `format` | Log format; with `json`, each entry includes `serverName` |
| `destination` | Log destination (default: stderr) |
| `categories` | Log categories to include |
| `payloadsEnabled` | Include full request/response payloads (default: `false`) |
| `payloadMaxChars` | Optional max chars for payload logs |
Environment variables:
DYNAMIC_MCP_LOG_ENABLED
DYNAMIC_MCP_LOG_LEVEL
DYNAMIC_MCP_LOG_FORMAT
DYNAMIC_MCP_LOG_DESTINATION
DYNAMIC_MCP_LOG_CATEGORIES
DYNAMIC_MCP_LOG_PAYLOADS_ENABLED
DYNAMIC_MCP_LOG_PAYLOAD_MAX_CHARS
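Because environment variables sit between CLI flags and the config file in precedence, they are handy for one-off overrides; for example (the config path is illustrative):

```bash
# Emit JSON-formatted logs (each entry includes serverName) for a single run
DYNAMIC_MCP_LOG_FORMAT=json dynamic-mcp-server --config ./code-review.json
```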
Using a Dynamic MCP Server
Once you have created a configuration file for your dynamic-mcp-server, you need to configure your model's CLI to use it.
Installation
First, install the dynamic-mcp-server globally:
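A sketch, assuming the package is published under the name dynamic-mcp-server; adjust if you install from a different registry name:

```bash
# If installing from a local checkout of this repository instead, run: npm install -g .
npm install -g dynamic-mcp-server
```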
This makes the dynamic-mcp-server command available on your PATH. In your MCP client configs (Claude, Codex, Gemini), set the server command to dynamic-mcp-server and pass your JSON config path with the --config flag (plus any flags like --async or --prompt).
For contribution and release process details, see CONTRIBUTING.md.
MCP Client Configuration
MCP clients are your Codex, Claude, and Gemini CLIs. These settings tell your CLI which dynamic-mcp-servers are available and how to launch them.
CLI Options
| Option | Description |
| --- | --- |
| `--config` | Path to the JSON configuration file (required) |
| `--prompt` | A prompt string or path to a prompt file. If provided, this prompt is prepended to every task with a newline separator. If the value is a valid file path, its contents are used. |
| `--async` | Run tools asynchronously by default |
| `--handshake-and-exit` | Print handshake JSON and exit |
| `--log-level` | Logging level |
| `--log-format` | Logging format |
| `--log-destination` | Logging destination |
| `--log-categories` | Comma-separated list of log categories |
| `--log-payloads` | Enable full request/response payload logging |
| `--log-payload-max-chars` | Truncate payload logs to max char count |
| `--no-log` | Disable logging |
Prompt Prefix
The --prompt option allows you to prepend a system prompt to every task. This is useful for:
Setting consistent behavior across all tools
Adding project-specific context or guidelines
Defining output format requirements
Note: Prompt can be used at the MCP Server configuration level, thus applying to all tools in that server, or at the Tool Definition level, giving the tool a specific prompt.
Using a prompt file:
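For example, pointing at the sample prompt file (the config path is illustrative):

```bash
dynamic-mcp-server --config ./code-review.json --prompt ./examples/code-review-prompt.txt
```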
Using a literal string:
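Or pass the text directly (the string here is only an illustration):

```bash
dynamic-mcp-server --config ./code-review.json --prompt "You are a meticulous reviewer. Keep feedback concise and actionable."
```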
Example prompt file (examples/code-review-prompt.txt):
This is a very simple prompt; it hasn't been vetted and is not recommended for real use. It is included for illustration only.
When used with the code-review.json config, every code review task will have this prompt prepended to it.
Validation & Smoke Tests
Two layers of checks help catch protocol or CLI regressions early:
Quick handshake smoke: `dynamic-mcp-server --config /path/to/config.json --handshake-and-exit` exits after printing handshake JSON to stdout; useful to confirm wiring before running clients.
Protocol-only (no external CLIs): `npm run verify:protocol` spawns `src/main.js --handshake-and-exit` with `__tests__/test-config.json` and validates the handshake JSON. Fails if the output contains "error" or is not JSON.
CLI-specific smoke (per model): `npm run verify:clients` verifies each CLI (`claude`, `codex`, `gemini`) is installed and at least at the versions listed in "CLI Versions". Performs a handshake-only smoke test per model; captures stdout/stderr to temp files and deletes them automatically. Missing or unauthenticated CLIs are skipped with a clear message.
Full CLI exercise: `npm run verify:clients:full` (sets `EXERCISE_CLI=1`) additionally runs a simple tool call via each CLI using `executeTask`. Asserts expected text, checks that neither stdout nor stderr contains "error", and cleans temp files via traps. Gemini's exercise step is skipped by default because its CLI can auto-invoke tools and emit quota errors; set `SKIP_GEMINI_EXERCISE=0` to force it. Note: the maintainer rarely uses the Gemini CLI, so Gemini support may lag behind Claude/Codex; run the forced exercise if you rely on Gemini and open issues if it breaks.
Known limitations:
Gemini CLI can auto-invoke tools and return quota errors; its exercise test is off by default.
Default CLI flags are “dangerous” and should be tightened for production.
Async jobs are in-memory only; they don’t survive process restarts.
Prerequisites
Node.js environment.
Model CLIs installed and authenticated:
`claude`, `codex`, `gemini`.
CLI flags assumed by this project:
Claude: `--dangerously-skip-permissions`
Codex: `--dangerously-bypass-approvals-and-sandbox --search exec --skip-git-repo-check`
Gemini: `-y -p`
If newer CLI versions change these flags, the CLI smoke tests will fail fast and print the detected version so you can update `CLI_CONFIG` in `src/main.js`.
Artifacts & Cleanup
Temp files are created via `mktemp` and removed on `EXIT` traps.
On failure, the script prints temp file paths so you can inspect them; on success, `/tmp` is left clean.
Set `DEBUG_KEEP=1` to retain temp files for debugging.
Async Mode
Some MCP clients have tool execution timeouts. For long-running tasks, you can enable async mode using the --async flag when starting the server. This sets the default for all tools, but each tool can override the behavior with async: true or async: false in the config.
Starting with async mode:
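For example (the config path is illustrative):

```bash
dynamic-mcp-server --config ./my-config.json --async
```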
When async is enabled (either via --async or a tool’s async: true setting), the server will start the task in the background and return a jobId immediately. Use the built-in check-job-status tool to poll until the job is completed or failed. The check-job-status tool is registered whenever at least one async tool exists.
Timeouts: async jobs are capped by DYNAMIC_MCP_JOB_TIMEOUT_MS (default: 20 minutes). When the timeout is reached, the subprocess is terminated and the job is marked failed.
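For illustration, polling for a job from an MCP client looks roughly like this JSON-RPC `tools/call` request (the `jobId` argument name is an assumption based on the id returned when the task starts):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "check-job-status",
    "arguments": { "jobId": "abc123" }
  }
}
```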
Per-tool async override example:
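A sketch of a tool that always runs asynchronously regardless of the server default (the tool itself is hypothetical; field names follow the tables above):

```json
{
  "name": "analyze-repository",
  "description": "Deep analysis of an entire repository (long-running)",
  "async": true,
  "prompt": "Analyze the repository at {{path}} and summarize architectural issues.",
  "inputs": [
    { "name": "path", "type": "string", "description": "Repository path", "required": true }
  ]
}
```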
Claude
In your model's configuration file (e.g., ~/.claude/config.json), you can add multiple entries for the dynamic-mcp-server, each with its own configuration file.
For the dynamic MCP server with a custom config:
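A minimal sketch, assuming the common `mcpServers` entry shape (`command` plus `args`); the server key and config path are illustrative:

```json
{
  "mcpServers": {
    "dynamic-mcp-server": {
      "command": "dynamic-mcp-server",
      "args": ["--config", "/path/to/code-review.json"]
    }
  }
}
```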
To add a prompt prefix (string or file path), append `--prompt` and its value to that entry's `args` array.
The `timeout` field (in seconds) controls the startup timeout. For the Claude CLI, the default is 60 seconds. Tool execution has a hardcoded 10-minute limit.
Now, when you run your model's CLI, it will automatically start the dynamic-mcp-server for each entry and make the tools you defined in your configuration files available.
Codex
Set up the Codex `config.toml` with an entry for the dynamic MCP server:
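A minimal sketch, assuming Codex's `[mcp_servers.<name>]` table with `command` and `args`; placing `tool_timeout_sec` inside that table is also an assumption, and the paths are illustrative:

```toml
[mcp_servers.dynamic-mcp-server]
command = "dynamic-mcp-server"
args = ["--config", "/path/to/code-review.json"]
# A generous tool timeout so long-running tools can finish (see the note below)
tool_timeout_sec = 1200
```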
To add a prompt prefix, append `--prompt` and its value to the `args` array.
Make sure `tool_timeout_sec` is long enough that the dynamic-mcp-server can finish its work.
Gemini
For Gemini, you'll configure MCP servers in the gemini CLI's configuration file.
For the dynamic MCP server:
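A minimal sketch, assuming the Gemini CLI settings use the same `mcpServers` shape as above; paths are illustrative:

```json
{
  "mcpServers": {
    "dynamic-mcp-server": {
      "command": "dynamic-mcp-server",
      "args": ["--config", "/path/to/code-review.json"]
    }
  }
}
```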
To add a prompt prefix (string or file path), append `--prompt` and its value to the `args` array.
Now, when you run gemini, it will automatically start the dynamic-mcp-server for each entry and make the tools you defined in your configuration files available.
This software is provided "as is" and the user assumes all risks and responsibilities associated with its use. The owner of the repository is not responsible for any issues, problems, loss, damage, or other liabilities that may arise from the use of this software.
AI Use
AI was used for some coding, testing, and documentation. A human had the original idea, wrote the original code, and performed all code reviews.
License
This project is licensed under the MIT License - see the LICENSE file for details.