ToolMesh
ToolMesh — Let AI agents touch real systems. Safely.
One Go binary between your agents and your infrastructure — with authorization, credential security, audit logging, and output policies on every tool call.
30 lines of YAML. No server to build.
In practice, MCP servers only expose a fraction of the REST API they wrap — and you'll hit the gaps fast. ToolMesh lets you replace the wrapper layer with .dadl files — a declarative YAML format that describes any REST API as MCP tools. No wrapper server to build, deploy, or maintain.
Current: Claude → ToolMesh → MCP Server → REST API
With DADL: Claude → ToolMesh → REST API (via .dadl file)

You don't write the YAML by hand. You ask an LLM. Claude, GPT, Gemini — any model that knows the DADL spec generates a working .dadl file in seconds. Describe what you need, drop the file into config/dadl/, done.
"Create a DADL for the GitHub API — list repos, open issues, and create pull requests."
10 seconds. Works with any LLM that knows the format.
And unlike MCP gateways that just pass tool calls through, ToolMesh adds what production deployments actually need:
Authorization — fine-grained user → plan → tool control (OpenFGA)
Credential Security — secrets injected at execution time, never in prompts
Audit Trail — every tool call recorded with structured logging or queryable SQLite
Input & Output Gating — JS policies validate parameters and filter responses
The Six Pillars
Pillar | What it does | Backed by |
Any Backend | Connect MCP servers or describe REST APIs declaratively via DADL | Go MCP SDK + DADL (.dadl files) |
Code Mode | LLMs write typed JS instead of error-prone JSON | AST-parsed tool calls |
Audit | Execution trail — every tool call recorded and queryable | slog / SQLite |
OpenFGA | Fine-grained authorization (user → plan → tool) | OpenFGA |
Credential Store | Inject secrets at execution time, never in prompts | Per-request injection via Executor pipeline |
Gate | JavaScript policies validate inputs (pre) and filter outputs (post) | goja |
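To make the Gate pillar concrete, here is a sketch of what a pre/post policy pair can look like. The hook names (`pre`, `post`) and return shape are hypothetical illustrations, not ToolMesh's documented policy API:

```javascript
// Hypothetical gate policy sketch — ToolMesh's real policy API may differ.
// A pre-hook validates inputs before the backend is called; a post-hook
// filters the response before it reaches the model.
function pre(toolName, params) {
  // Illustrative rule: reject path traversal in a repo parameter.
  if (typeof params.repo === "string" && params.repo.includes("..")) {
    return { allow: false, reason: "suspicious repo name" };
  }
  return { allow: true };
}

function post(toolName, result) {
  // Illustrative rule: redact anything that looks like a secret key
  // before the LLM ever sees the response.
  return JSON.parse(
    JSON.stringify(result).replace(/sk-[A-Za-z0-9-]+/g, "[REDACTED]")
  );
}
```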
Quickstart
# Clone
git clone https://github.com/DunkelCloud/ToolMesh.git
cd ToolMesh
# Configure
cp .env.example .env
# IMPORTANT: Set a password — without it, all requests are rejected:
# TOOLMESH_AUTH_PASSWORD=my-secret-password
# Or set an API key for programmatic access:
# TOOLMESH_API_KEY=my-api-key
# Optional: local overrides (build locally, enable OpenFGA, HTTPS proxy, ...)
# cp docker-compose.override.yml.example docker-compose.override.yml
# # then edit docker-compose.override.yml — picked up automatically by Docker Compose
# Start (runs in bypass mode by default — no authz required)
docker compose up -d
# Verify it's running (default port: 8123)
curl http://localhost:8123/health
# MCP endpoint: http://localhost:8123/mcp
# Note: Most MCP clients require HTTPS — see TLS section below

TLS (important)
ToolMesh itself serves plain HTTP. Most MCP clients — including Claude Desktop — require HTTPS and will reject http:// URLs. You need a TLS-terminating reverse proxy in front of ToolMesh:
Option | When to use |
Caddy | Self-hosted with a public domain — automatic Let's Encrypt certs |
Cloudflare Tunnel | No open ports needed, zero-config TLS |
nginx / Traefik | Already in your stack |
For local development only, you can bypass TLS by editing claude_desktop_config.json by hand (the GUI enforces https://).
Connect to Claude Desktop
Add to your Claude Desktop MCP config:
{
  "mcpServers": {
    "toolmesh": {
      "url": "https://toolmesh.example.com/mcp"
    }
  }
}

For local development without TLS proxy:
{
  "mcpServers": {
    "toolmesh": {
      "url": "http://localhost:8123/mcp"
    }
  }
}

Connect to Claude.ai (Custom Connector)
ToolMesh supports OAuth 2.1 with PKCE S256 for remote access. Configure users in config/users.yaml and use the public HTTPS URL as the MCP endpoint.
Authentication
ToolMesh supports two authentication methods that can be used independently or together. All OAuth state (tokens, auth codes, clients) is persisted in Redis and survives server restarts.
OAuth 2.1 (Interactive Login)
Define users in config/users.yaml with bcrypt-hashed passwords:
users:
  - username: admin
    password_hash: "$2a$10$..."
    company: dunkelcloud
    plan: pro
    roles: [admin]

Generate password hashes with any bcrypt-capable utility:
htpasswd -nbBC 10 "" "my-password" | cut -d: -f2

For single-user setups, TOOLMESH_AUTH_PASSWORD still works as a fallback. Configure the identity with TOOLMESH_AUTH_USER, TOOLMESH_AUTH_PLAN, and TOOLMESH_AUTH_ROLES (defaults: owner, pro, admin).
API Keys (Programmatic Access)
Define API keys in config/apikeys.yaml with bcrypt-hashed keys:
keys:
  - key_hash: "$2a$10$..."
    user_id: claude-code-user
    company_id: dunkelcloud
    plan: pro
    roles: [tool-executor]

Each key maps to a distinct user identity with its own plan and roles, which flow through to OpenFGA authorization.
For single-key setups, TOOLMESH_API_KEY still works as a fallback. The same TOOLMESH_AUTH_USER, TOOLMESH_AUTH_PLAN, and TOOLMESH_AUTH_ROLES variables control the identity.
DCR Rate Limiting
Dynamic Client Registration is rate-limited to 5 registrations per hour per IP to prevent abuse.
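The rule can be pictured as a sliding window per IP. This sketch is illustrative, not ToolMesh's internal implementation:

```javascript
// Illustrative sliding-window limiter for the 5-registrations-per-hour rule.
const WINDOW_MS = 60 * 60 * 1000;
const LIMIT = 5;
const hits = new Map(); // ip -> timestamps of recent registrations

function allowRegistration(ip, now = Date.now()) {
  // Keep only timestamps that still fall inside the one-hour window.
  const recent = (hits.get(ip) || []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) {
    hits.set(ip, recent);
    return false; // over the limit: reject this registration
  }
  recent.push(now);
  hits.set(ip, recent);
  return true;
}
```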
Authorization Mode
OPENFGA_MODE controls whether OpenFGA authorization is enforced:
Mode | Behavior |
bypass | All tool calls are allowed without authz checks |
restrict | OpenFGA enforces user → plan → tool authorization (requires a bootstrapped OpenFGA instance) |
Start with bypass to get running quickly, then switch to restrict after bootstrapping OpenFGA.
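One possible OpenFGA model for the user → plan → tool chain. This is an illustrative sketch; the model ToolMesh actually bootstraps may differ:

```
model
  schema 1.1

type user

type plan
  relations
    define subscriber: [user]

type tool
  relations
    define plan: [plan]
    define can_execute: subscriber from plan
```

With this model, a user can execute a tool if they are a subscriber of a plan that the tool is assigned to.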
Configuration
See docs/configuration.md for all environment variables.
Timeout tuning
Variable | Default | Description |
TOOLMESH_MCP_TIMEOUT | | HTTP client timeout (seconds) for calls to downstream MCP servers |
TOOLMESH_EXEC_TIMEOUT | | Tool execution timeout (seconds) — context deadline for backend calls |
Increase these for backends that need more time (e.g. browser-based web fetchers):
TOOLMESH_MCP_TIMEOUT=180
TOOLMESH_EXEC_TIMEOUT=180

Logging
ToolMesh uses structured logging via slog. The default level is debug for full MCP traceability out of the box — set LOG_LEVEL=info or higher for production since debug logs include complete request/response payloads. Per-backend debug files, log formats, and all logging variables are documented in docs/configuration.md.
Architecture
See docs/architecture.md for the full architecture documentation.
┌─────────────────────────────────┐
│ ToolMesh │
│ │
│ Redis · OpenFGA · Audit │
│ Credential Store · JS Gate │
│ │
AI Agent ──MCP──────────▶ │ AuthZ ▸ Creds ▸ Gate ▸ Exec │
│ │
└──┬──────┬───────┬───────┬───────┘
│ │ │ │
MCP Client .dadl .dadl .dadl
│ │ │ │
▼ ▼ ▼ ▼
MCP Stripe GitHub Vikunja
Server API API APIAdding an External MCP Server
Create or edit config/backends.yaml:
backends:
  - name: memorizer
    transport: http
    url: "https://memorizer.example.com/mcp"
    api_key_env: "MEMORIZER_API_KEY"

Set the credential as an environment variable:
CREDENTIAL_MEMORIZER_API_KEY=sk-mem-xxxxx

Tools from each backend are exposed with a prefix (e.g. memorizer_retrieve_knowledge). Credentials are injected by the Executor at runtime via the CredentialStore — the LLM never sees API keys.
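The injection step can be sketched like this. The function names and the Bearer header are assumptions for illustration; the real Executor and CredentialStore are Go components inside ToolMesh:

```javascript
// Sketch of execution-time credential injection (hypothetical names).
const credentialStore = {
  // api_key_env "MEMORIZER_API_KEY" resolves to env CREDENTIAL_MEMORIZER_API_KEY
  get: name => process.env[`CREDENTIAL_${name}`],
};

function buildBackendRequest(backend, toolCall) {
  const headers = { "Content-Type": "application/json" };
  // The key is resolved here, after the model has produced the tool call,
  // so it never appears in any prompt or model output.
  const key = credentialStore.get(backend.api_key_env);
  if (key) headers["Authorization"] = `Bearer ${key}`;
  return { url: backend.url, headers, body: JSON.stringify(toolCall) };
}
```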
REST Proxy Mode (DADL)
When an MCP server doesn't expose an endpoint you need, describe it in a .dadl file and ToolMesh calls the REST API directly — no wrapper server needed. Both modes run in parallel.
Add a REST backend to config/backends.yaml:
backends:
  - name: vikunja
    transport: rest
    dadl: /app/dadl/vikunja.dadl
    url: "https://vikunja.example.com/api/v1"

Want Claude to list GitHub issues? Here's all it takes:
tools:
  list_issues:
    method: GET
    path: /repos/{owner}/{repo}/issues
    description: "List issues for a repository"
    params:
      owner: { type: string, in: path, required: true }
      repo: { type: string, in: path, required: true }
      state: { type: string, in: query }

ToolMesh handles auth, pagination, retries, and error mapping. DADL supports bearer tokens, OAuth2, session auth, API keys, automatic pagination, retry with backoff, response transformation, composite tools, and more.
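As an illustration of the retry-with-backoff behavior (the attempt count and delays here are assumptions for the sketch, not DADL's actual defaults):

```javascript
// Illustrative retry-with-backoff wrapper: retry a failing call with
// exponentially growing delays, rethrowing after the final attempt.
async function withRetry(fn, { attempts = 3, baseMs = 200 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err; // out of attempts: give up
      // Exponential backoff: baseMs, 2*baseMs, 4*baseMs, ...
      await new Promise(r => setTimeout(r, baseMs * 2 ** i));
    }
  }
}
```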
For the full spec, examples, and the community registry, see dadl.ai. The fastest way to create a .dadl file is asking any LLM that knows the format.
Code Mode
Instead of raw JSON tool calls, LLMs can use typed JavaScript:
// List available tools with TypeScript definitions
const tools = await toolmesh.list_tools();
// Execute tools with typed parameters
const result = await toolmesh.memorizer_retrieve_knowledge({
  query: "project architecture",
  top_k: 5
});

ToolMesh parses the code, extracts tool calls, and routes them through the full execution pipeline (AuthZ → Credentials → Gate pre → Backend → Gate post).
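A toy version of the extraction step: ToolMesh parses the real AST, but a simplified pattern match conveys the idea of pulling tool calls out of model-written code:

```javascript
// Illustrative sketch (not ToolMesh's actual parser): find every
// toolmesh.<name>(...) call so each one can be routed through the
// execution pipeline. A real implementation walks the AST instead.
function extractToolCalls(code) {
  const calls = [];
  const re = /toolmesh\.([A-Za-z_]\w*)\s*\(/g;
  let m;
  while ((m = re.exec(code)) !== null) calls.push(m[1]);
  return calls;
}

const snippet = `
const tools = await toolmesh.list_tools();
const result = await toolmesh.memorizer_retrieve_knowledge({ query: "q" });
`;
console.log(extractToolCalls(snippet)); // ["list_tools", "memorizer_retrieve_knowledge"]
```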
Extension Model
ToolMesh uses a registry-based extension model inspired by Go's database/sql driver pattern. Three component types are extensible via init() registration:
Component | Built-in |
Credential Store | Per-request injection via the Executor pipeline |
Tool Backend | MCP servers (Go MCP SDK) and REST via DADL (.dadl files) |
Gate Evaluator | JavaScript policies (goja) |
Enterprise extensions (InfisicalStore, VaultStore, Compliance-LLM, etc.) are available separately and included via Go build tags: go build -tags enterprise ./cmd/toolmesh.
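In miniature, the pattern looks like this (sketched in JavaScript for brevity; names are illustrative, and ToolMesh's real registries are Go packages that register themselves in init()):

```javascript
// database/sql-style registry: components self-register under a name,
// and the core instantiates them by the name found in config.
const registry = new Map();

function register(kind, name, factory) {
  registry.set(`${kind}:${name}`, factory);
}

function open(kind, name, cfg) {
  const factory = registry.get(`${kind}:${name}`);
  if (!factory) throw new Error(`no ${kind} registered as "${name}"`);
  return factory(cfg);
}

// A built-in component registers itself at load time...
register("credential_store", "env", () => ({
  get: key => process.env[`CREDENTIAL_${key}`],
}));

// ...and the core opens it by configured name, never importing it directly.
const store = open("credential_store", "env", {});
```

An enterprise build simply links in extra packages whose load-time registrations add more names to the same registry.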
See docs/architecture.md for details.
Contributing
See CONTRIBUTING.md.
License
Apache 2.0 — Copyright 2025–2026 Dunkel Cloud GmbH