Heimdall
@cardor/heimdall-mcp
Transparent proxy for any MCP server. Intercepts all JSON-RPC messages, measures latency, and stores traces in a configurable database — without touching the original server.
How it works
flowchart LR
A["MCP Client\n(Claude Desktop / OpenCode / Cursor)"]
subgraph proxy["heimdall-mcp"]
B["TelemetryInterceptor"]
C["ForwardInterceptor"]
D[("SQLite\nPostgres\nMySQL")]
B --> C
B -->|"saves span"| D
end
S["Real MCP server\n(subprocess / HTTP / SSE)"]
A -->|"stdio"| B
C -->|"stdio · http · sse"| S
S -->|"response"| C
C -->|"response"| A

The proxy always exposes stdio to the MCP client and speaks the correct transport to the real server. Every request/response pair is converted into a span with timing, attributes, and the input/output body.
Installation
npm install -g @cardor/heimdall-mcp
# or as a project dependency
npm install @cardor/heimdall-mcp

Usage modes
Mode 1 — CLI wrapping a subprocess (stdio)
The MCP client thinks it is talking to heimdall-mcp. The proxy spawns the real server as a child process and forwards all messages.
mcp.json / Claude Desktop configuration:
{
"mcpServers": {
"my-server": {
"command": "heimdall-mcp",
"args": [
"--store", "sqlite://~/.mcp-traces/traces.db",
"--", "node", "my-server.js"
]
}
}
}

The `--` separator divides heimdall-mcp flags from the real server command. Everything after it is executed as a subprocess.
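To make the split concrete, here is a small sketch of how a `--` separator divides an argument vector. `splitArgs` is a hypothetical helper written for illustration; it is not part of the heimdall-mcp API.

```typescript
// Sketch: everything before `--` belongs to the proxy, everything after it
// is the wrapped server command. `splitArgs` is a hypothetical helper.
function splitArgs(argv: string[]): { proxyFlags: string[]; serverCmd: string[] } {
  const sep = argv.indexOf('--')
  if (sep === -1) return { proxyFlags: argv, serverCmd: [] }
  return { proxyFlags: argv.slice(0, sep), serverCmd: argv.slice(sep + 1) }
}

const { proxyFlags, serverCmd } = splitArgs([
  '--store', 'sqlite://~/.mcp-traces/traces.db',
  '--', 'node', 'my-server.js',
])
console.log(proxyFlags) // ['--store', 'sqlite://~/.mcp-traces/traces.db']
console.log(serverCmd)  // ['node', 'my-server.js']
```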
With a globally installed server:
{
"mcpServers": {
"filesystem": {
"command": "heimdall-mcp",
"args": [
"--store", "sqlite://~/.mcp-traces/traces.db",
"--", "npx", "@modelcontextprotocol/server-filesystem", "/tmp"
]
}
}
}

With Postgres instead of SQLite:
{
"mcpServers": {
"my-server": {
"command": "heimdall-mcp",
"args": [
"--store", "postgres://user:pass@localhost:5432/traces",
"--", "node", "my-server.js"
]
}
}
}

Mode 2 — CLI wrapping a remote HTTP server
When the MCP server is already running and exposes an HTTP endpoint.
{
"mcpServers": {
"remote-server": {
"command": "heimdall-mcp",
"args": [
"--store", "sqlite://~/.mcp-traces/traces.db",
"--out", "http",
"--target", "http://localhost:3001"
]
}
}
}

The proxy exposes stdio to the client and forwards each message as an HTTP POST to the target URL.
Mode 3 — CLI wrapping a remote SSE server
For servers that use Server-Sent Events.
{
"mcpServers": {
"sse-server": {
"command": "heimdall-mcp",
"args": [
"--store", "postgres://user:pass@host/db",
"--out", "sse",
"--target", "http://remote.example.com"
]
}
}
}

The proxy connects to {target}/sse to receive responses and sends requests as POST to {target}.
Mode 4 — Library for developers
When you have access to the source code and want to integrate the proxy programmatically.
Minimal setup:
import { ProxyBuilder } from '@cardor/heimdall-mcp'
const proxy = await ProxyBuilder.create()
.inbound({ transport: 'stdio' })
.outbound({ transport: 'stdio', command: 'node', args: ['my-server.js'] })
.store('sqlite://./traces.db')
.build()
await proxy.start()
// clean shutdown
process.on('SIGINT', () => proxy.stop())

stdio → remote HTTP:
const proxy = await ProxyBuilder.create()
.inbound({ transport: 'stdio' })
.outbound({ transport: 'http', url: 'http://localhost:3001' })
.store('postgres://user:pass@localhost/traces')
.build()
await proxy.start()

HTTP inbound (proxy listens on a port):
const proxy = await ProxyBuilder.create()
.inbound({ transport: 'http', port: 8080 })
.outbound({ transport: 'stdio', command: 'node', args: ['server.js'] })
.store('mysql://user:pass@localhost/traces')
.build()
await proxy.start()

With OTLP export and debug logging:
const proxy = await ProxyBuilder.create()
.inbound({ transport: 'stdio' })
.outbound({ transport: 'stdio', command: 'node', args: ['my-server.js'] })
.store('sqlite://./traces.db')
.otlp('http://localhost:4318/v1/traces') // export to Jaeger / Tempo / Grafana
.setDebug(true) // verbose span logs to stderr
.build()
await proxy.start()

With a custom interceptor:
import type { Interceptor, InterceptorContext, JsonRpcMessage } from '@cardor/heimdall-mcp'
class LogAllInterceptor implements Interceptor {
name = 'LogAllInterceptor'
async intercept(
request: JsonRpcMessage,
context: InterceptorContext,
next: () => Promise<JsonRpcMessage>
): Promise<JsonRpcMessage> {
console.log('→', request.method, request.id)
const response = await next()
console.log('←', response.id, response.error ? 'ERROR' : 'OK')
return response
}
}
const proxy = await ProxyBuilder.create()
.inbound({ transport: 'stdio' })
.outbound({ transport: 'stdio', command: 'node', args: ['server.js'] })
.store('sqlite://./traces.db')
.build()
proxy.addInterceptor(new LogAllInterceptor())
await proxy.start()

Stores
SQLite
No external server required — ideal for local development.
Valid connection strings:
sqlite://./traces.db
sqlite://~/.mcp-traces/traces.db
sqlite:///absolute/path/traces.db

Driver: `@libsql/client` — pure WASM, no native compilation required.
Schema:
heimdall_spans
span_id TEXT PRIMARY KEY
trace_id TEXT NOT NULL
name TEXT NOT NULL → "mcp.tool.call", "mcp.initialize", etc.
kind INTEGER → OTel SpanKind: 0=INTERNAL, 1=SERVER, 2=CLIENT, 3=PRODUCER, 4=CONSUMER
status INTEGER NOT NULL → 0=UNSET, 1=OK, 2=ERROR
status_message TEXT
start_time_unix_nano INTEGER NOT NULL → Unix nanoseconds (OTel native)
end_time_unix_nano INTEGER NOT NULL
attributes TEXT/JSON → mcp.jsonrpc.method, mcp.tool.name, mcp.transport, mcp.status, mcp.request.id, mcp.server.name, mcp.latency.*, duration.ms, etc.
events TEXT/JSON → OTel events array (e.g. error events)
links TEXT/JSON → OTel links array
resource_attributes TEXT/JSON → service.name, service.version, service.namespace (OTel semantic conventions)
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
heimdall_metrics
id INTEGER PRIMARY KEY AUTOINCREMENT
tool_name TEXT NOT NULL
call_count INTEGER DEFAULT 0
error_count INTEGER DEFAULT 0
avg_duration INTEGER
updated_at TEXT NOT NULL

Note: SQLite uses `INTEGER` for nanosecond timestamps because SQLite has no native `BIGINT` type — the integer affinity handles large values correctly.
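When reading these columns back in application code, the nanosecond values exceed JavaScript's safe integer range, so they should be handled as `BigInt`. A minimal sketch (not part of the heimdall-mcp API) of deriving a span's duration from the stored timestamps:

```typescript
// Sketch: derive duration in milliseconds from the stored Unix-nanosecond
// timestamps. Plain numbers would lose precision, since nanosecond epochs
// exceed Number.MAX_SAFE_INTEGER (2^53 - 1); BigInt keeps them exact.
function durationMs(startNano: bigint, endNano: bigint): number {
  return Number((endNano - startNano) / 1_000_000n)
}

// Example: a span that ran for 42 ms.
const start = 1_700_000_000_000_000_000n
const end = start + 42_000_000n
console.log(durationMs(start, end)) // 42
```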
PostgreSQL
postgres://user:pass@localhost:5432/my_db
postgresql://user:pass@localhost:5432/my_db

Driver: `postgres` — pure JS, no node-gyp.
Schema differences from SQLite:
- `start_time_unix_nano` / `end_time_unix_nano` → `BIGINT` (native 64-bit, exact for nanoseconds)
- `attributes` / `events` / `links` / `resource_attributes` → `JSONB` (indexable, queryable)
- `avg_duration` → `REAL`
- `updated_at` → `TIMESTAMP`
- `created_at` → `TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP`
MySQL
mysql://user:pass@localhost:3306/my_db

Driver: `mysql2`.
Schema differences from SQLite:
- `span_id` / `trace_id` / `name` → `VARCHAR(64/512)` (explicit lengths)
- `start_time_unix_nano` / `end_time_unix_nano` → `BIGINT` (native 64-bit, exact for nanoseconds)
- `attributes` / `events` / `links` / `resource_attributes` → `JSON`
- `avg_duration` → `FLOAT`
- `id` in metrics → `BIGINT UNSIGNED AUTO_INCREMENT`
- `updated_at` → `TIMESTAMP(3)` (millisecond precision)
What gets recorded
Every JSON-RPC message produces a span in the heimdall_spans table. All attributes follow the mcp.* namespace for interoperability with other MCP-aware tools.
| MCP method | Span name |
| --- | --- |
| `initialize` | `mcp.initialize` |
| `tools/list` | `mcp.tools.list` |
| `tools/call` | `mcp.tool.call` |
Common attributes on every span:

| Attribute | Description |
| --- | --- |
| `mcp.jsonrpc.method` | The JSON-RPC method name |
| `mcp.request.id` | JSON-RPC request ID — lets you join request/response deterministically |
| `mcp.server.name` | Name of the real MCP server (captured from the `initialize` handshake) |
| `mcp.server.version` | Version of the real MCP server (captured from the `initialize` handshake) |
| `duration.ms` | Total round-trip latency in milliseconds |
Latency breakdown (on `tools/call`): the `mcp.latency.*` attributes separate the time the real server took to respond from the overhead introduced by the proxy itself.
Body capture modes:

Body capture is controlled by `--body-mode` (CLI) or `.setBodyMode()` (library). Available modes are `redacted` (the default), `hash`, and `full`; only `full` stores the raw JSON bodies.

Use `full` only for local development — raw bodies in shared OTLP backends can leak secrets.
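A sketch of what the three modes amount to, written as a standalone function. This is illustrative only: the exact redaction placeholder and hash algorithm heimdall-mcp uses are assumptions here.

```typescript
import { createHash } from 'node:crypto'

type BodyMode = 'redacted' | 'hash' | 'full'

// Illustrative sketch of the three capture modes; the real placeholder
// text and hash algorithm in heimdall-mcp may differ.
function captureBody(body: unknown, mode: BodyMode): string {
  const raw = JSON.stringify(body)
  switch (mode) {
    case 'redacted':
      return '[redacted]' // hypothetical placeholder, nothing sensitive stored
    case 'hash':
      return createHash('sha256').update(raw).digest('hex') // fingerprint only
    case 'full':
      return raw // raw JSON; safe for local development only
  }
}

console.log(captureBody({ path: '/etc/passwd' }, 'redacted')) // [redacted]
```

With `hash` you can still detect that two calls had identical inputs without ever persisting the payload itself.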
Agent correlation (optional):

If the MCP client sends a `_meta` object inside `params`, heimdall-mcp automatically extracts conversation, turn, and agent-run identifiers from it and records them as span attributes. As a fallback, the env vars `MCP_CONVERSATION_ID`, `MCP_TURN_ID`, and `MCP_AGENT_RUN_ID` are used if set.
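The extraction logic can be sketched roughly as follows. The `_meta` key names (`conversationId`, `turnId`, `agentRunId`) are assumptions for illustration; the env var names are the documented fallbacks.

```typescript
interface Correlation {
  conversationId?: string
  turnId?: string
  agentRunId?: string
}

// Sketch: read correlation IDs from `_meta`, falling back to env vars.
// The `_meta` key names below are assumed, not confirmed by the docs.
function extractCorrelation(
  params: { _meta?: Record<string, unknown> } | undefined,
  env: Record<string, string | undefined> = {},
): Correlation {
  const meta = params?._meta ?? {}
  const pick = (key: string, fallback?: string): string | undefined =>
    typeof meta[key] === 'string' ? (meta[key] as string) : fallback
  return {
    conversationId: pick('conversationId', env.MCP_CONVERSATION_ID), // assumed key
    turnId: pick('turnId', env.MCP_TURN_ID),                         // assumed key
    agentRunId: pick('agentRunId', env.MCP_AGENT_RUN_ID),            // assumed key
  }
}

console.log(extractCorrelation({ _meta: { conversationId: 'c-1' } }).conversationId) // c-1
```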
On error, every span also gets mcp.error.message and mcp.error.code attributes plus an error OTel event attached to the span.
Every span's resource_attributes column contains OTel resource metadata:
- `service.name` — `@cardor/heimdall-mcp`
- `service.version` — package version
- `service.namespace` — `mcp-proxy`
The schema follows the OpenTelemetry data model natively:
- Timestamps are stored as Unix nanoseconds (`BIGINT` in Postgres/MySQL, `INTEGER` in SQLite)
- `kind` is an integer SpanKind (0=INTERNAL, 1=SERVER, 2=CLIENT, 3=PRODUCER, 4=CONSUMER)
- `status` is an integer SpanStatusCode (0=UNSET, 1=OK, 2=ERROR)
- JSON columns map directly to OTLP attribute bags
This means rows can be consumed directly by any OTel-compatible tool without transformation.
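To illustrate that direct mapping, here is a sketch that converts a stored row into the OTLP/JSON span shape. The `SpanRow` interface mirrors the schema above; coercing every attribute to `stringValue` is a simplification (OTLP also supports int, bool, and array values).

```typescript
// Field names mirror the heimdall_spans schema documented above.
interface SpanRow {
  span_id: string
  trace_id: string
  name: string
  kind: number                 // OTel SpanKind (0-4)
  status: number               // 0=UNSET, 1=OK, 2=ERROR
  status_message: string | null
  start_time_unix_nano: bigint
  end_time_unix_nano: bigint
  attributes: string           // JSON attribute bag
}

// Sketch: map a row onto the OTLP protobuf-JSON span encoding
// (camelCase keys, nanosecond timestamps as decimal strings).
function rowToOtlpSpan(row: SpanRow) {
  const attrs = JSON.parse(row.attributes) as Record<string, unknown>
  return {
    traceId: row.trace_id,
    spanId: row.span_id,
    name: row.name,
    kind: row.kind,
    startTimeUnixNano: row.start_time_unix_nano.toString(),
    endTimeUnixNano: row.end_time_unix_nano.toString(),
    status: { code: row.status, message: row.status_message ?? undefined },
    attributes: Object.entries(attrs).map(([key, value]) => ({
      key,
      value: { stringValue: String(value) }, // simplified: everything as string
    })),
  }
}

const row: SpanRow = {
  span_id: 'b7ad6b7169203331',
  trace_id: '5b8aa5a2d2c872e8321cf37308d69df2',
  name: 'mcp.tool.call',
  kind: 2,
  status: 1,
  status_message: null,
  start_time_unix_nano: 1_700_000_000_000_000_000n,
  end_time_unix_nano: 1_700_000_000_042_000_000n,
  attributes: '{"mcp.jsonrpc.method":"tools/call"}',
}
console.log(rowToOtlpSpan(row).attributes[0].key) // mcp.jsonrpc.method
```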
Jaeger UI (OTLP)
heimdall-mcp can export every span to a Jaeger instance in real time via OTLP HTTP, so you can visualize traces without querying the database directly.

1. Start Jaeger
docker run -d \
--name jaeger \
-p 16686:16686 \
-p 4318:4318 \
jaegertracing/all-in-one:latest

2. Add --otlp to your config
{
"mcpServers": {
"my-server": {
"command": "heimdall-mcp",
"args": [
"--store", "sqlite://~/.mcp-traces/traces.db",
"--otlp", "http://localhost:4318/v1/traces",
"--", "node", "my-server.js"
]
}
}
}

For the HTTP/SSE variant (e.g. the setup used during development of this project):
{
"mcpServers": {
"my-server": {
"command": "sh",
"args": [
"-c",
"heimdall-mcp --store postgresql://user:pass@localhost:5432/db --out http --target http://localhost:3000/mcp --otlp http://localhost:4318/v1/traces"
]
}
}
}

3. Open Jaeger UI
http://localhost:16686

Select service heimdall-mcp and click Find Traces. Each MCP method (mcp.tool.call, mcp.initialize, mcp.tools.list, …) appears as a separate trace with full attributes and input/output event bodies.
Dark mode — append ?uiConfig={"theme":"dark"} to the URL, or mount a config file:
echo '{"uiConfig":{"theme":"dark"}}' > jaeger-ui.json
docker rm -f jaeger && docker run -d \
--name jaeger \
-p 16686:16686 \
-p 4318:4318 \
-v $(pwd)/jaeger-ui.json:/etc/jaeger/ui-config.json \
-e JAEGER_UI_CONFIG_FILE=/etc/jaeger/ui-config.json \
jaegertracing/all-in-one:latest

The `--otlp` flag is additive — spans are saved to the database and exported to Jaeger at the same time.
Custom interceptors
The Interceptor interface is public. You can add your own logic into the pipeline before the telemetry interceptor:
interface Interceptor {
name: string
intercept(
request: JsonRpcMessage,
context: InterceptorContext,
next: () => Promise<JsonRpcMessage>
): Promise<JsonRpcMessage>
}
interface InterceptorContext {
startedAt: Date
traceId: string
spanId: string
bodyMode: 'redacted' | 'hash' | 'full'
transport: 'stdio' | 'http' | 'sse'
serverInfo: { name?: string; version?: string } // populated after initialize
conversationId?: string // from _meta or env var
turnId?: string
agentRunId?: string
metadata: Record<string, unknown> // shared bag between interceptors
}

Calling next() passes control to the next interceptor in the chain. ForwardInterceptor is always last — it makes the actual call to the real server, records the latency, and writes proxy_to_server_ms into context.metadata for the telemetry interceptor to read.
You can use context.metadata to pass data between your interceptor and others in the same pipeline run.
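The chaining mechanism can be sketched in a few lines. The types below are simplified stand-ins for `JsonRpcMessage` and `InterceptorContext`, and `runChain` is illustrative, not the proxy's actual dispatcher.

```typescript
// Simplified stand-ins for JsonRpcMessage / InterceptorContext.
type Msg = { id: number; method?: string; result?: unknown }
type Ctx = { metadata: Record<string, unknown> }
type InterceptorFn = (req: Msg, ctx: Ctx, next: () => Promise<Msg>) => Promise<Msg>

// Sketch: each interceptor's `next` invokes the one after it; past the end
// of the chain, control falls through to `final` (the stand-in for the
// real ForwardInterceptor call).
function runChain(chain: InterceptorFn[], req: Msg, ctx: Ctx, final: () => Promise<Msg>): Promise<Msg> {
  const dispatch = (i: number): Promise<Msg> =>
    i < chain.length ? chain[i](req, ctx, () => dispatch(i + 1)) : final()
  return dispatch(0)
}

// Demo: a timing interceptor that stores latency in ctx.metadata, the same
// hand-off mechanism the forward/telemetry pair uses.
const timing: InterceptorFn = async (req, ctx, next) => {
  const t0 = Date.now()
  const res = await next()
  ctx.metadata.latency_ms = Date.now() - t0
  return res
}

const demoCtx: Ctx = { metadata: {} }
runChain([timing], { id: 1, method: 'tools/call' }, demoCtx, async () => ({ id: 1, result: 'ok' }))
  .then((res) => console.log(res.result, demoCtx.metadata.latency_ms))
```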
CLI reference
heimdall-mcp [options] [-- command [args...]]
Options:
--store <url> Store connection string (required)
sqlite://./traces.db
postgres://user:pass@host/db
mysql://user:pass@host/db
--out <transport> Transport to the real server (default: stdio)
stdio | http | sse
--target <url> Server URL when --out is http or sse
--in <transport> Inbound transport (default: stdio)
stdio | http | sse
--in-port <port> Port for --in http or --in sse
--otlp <url> Export spans to an OTLP HTTP endpoint (e.g. Jaeger, Tempo)
Additive — spans are also saved to the store
Example: http://localhost:4318/v1/traces
--out-port <port> Port for outbound http or sse transport
--debug Write verbose logs to stderr (prints span names + trace IDs to stderr)
-V, --version Print version
-h, --help Print this help
-- Separates proxy flags from the subprocess command
(required when --out is stdio)
Examples:
# stdio proxy → subprocess
heimdall-mcp --store sqlite://./t.db -- node server.js
# stdio proxy → remote HTTP server
heimdall-mcp --store sqlite://./t.db --out http --target http://localhost:3001
# stdio proxy → remote SSE server with Postgres
heimdall-mcp --store postgres://user:pass@host/db --out sse --target http://remote.com
# with OTLP export to Jaeger
heimdall-mcp --store sqlite://./t.db --otlp http://localhost:4318/v1/traces -- node server.js

Roadmap
| Phase | Feature | Status |
| --- | --- | --- |
| 1 | | 🔜 Next |
| 2 | Enforce allow/deny in the interceptor — blocked calls return a JSON-RPC error and are logged | 📋 Planned |
| 3 | Filter by action type | 📋 Planned |
| 4 | Per-MCP scoped permissions | 📋 Planned |
| 5 | Runtime enforcement modes | 📋 Planned |