# MCP-CAN: Virtual CAN + MCP Server
An MCP server purpose-built to surface vehicle CAN/OBD data to an LLM/SLM. It simulates ECUs on a virtual CAN bus, decodes via a DBC, and exposes MCP tools over SSE—no hardware required by default.
## Highlights

- MCP server for CAN/OBD → LLM/SLM (tools + DBC metadata over SSE).
- Virtual CAN backend (python-can) out of the box; optional SocketCAN/vCAN on Linux.
- DBC-driven encoding/decoding via `cantools`.
- ECU simulator that streams multiple messages plus demo OBD-II responses.
- MCP server (SSE) exposing tools for frames, filtering, monitoring, and DBC info.
- Typer CLI: `mcp-can` (`simulate`, `server`, `frames`, `decode`, `monitor`, `obd-request`).
- Dockerfile + docker compose for server + simulator.
- Unit tests, type hints, lint config (ruff, mypy).
## Repository Layout

- `src/mcp_can/`
  - `cli.py` – Typer commands
  - `bus.py` – python-can helpers
  - `dbc.py` – DBC loading/decoding
  - `config.py` – env settings (`MCP_CAN_*`)
  - `models.py` – simple dataclasses
  - `simulator/runner.py` – ECU simulator + OBD responder
  - `server/fastmcp_server.py` – MCP tools (SSE)
  - `obd.py` – minimal OBD-II request/response helpers
- `vehicle.dbc` – sample CAN database
- `simulate-ecus.py`, `can-mcp.py` – entrypoints
- `docker/compose.yml`, `Dockerfile`
- `tests/` – unit tests
## Prerequisites

- Python 3.10+
- (Optional) Docker / Docker Compose
- (Optional) Ollama if you want a local LLM backend
## Install (Python)

From repo root:
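For example (assuming a standard `pyproject.toml`/`setup.py` in the repo):

```shell
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -e .
```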
## Quickstart (Simulator + MCP Server)

Two terminals:
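Using the CLI commands from the reference below:

```shell
# Terminal 1: start the ECU simulator
mcp-can simulate

# Terminal 2: start the MCP SSE server (default port 6278)
mcp-can server --port 6278
```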
Single-process alternative (helps on Windows, where the virtual backend may not share state across processes):
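One possible invocation, assuming the `can-mcp.py` entrypoint runs both the simulator and the server in one process:

```shell
python can-mcp.py
```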
Sample interactions:
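For example (the `0x`-prefixed hex arguments are an assumption; the CLI accepts `<hex|int>`):

```shell
# Capture raw frames for one second (JSON output)
mcp-can frames --seconds 1.0

# Watch the ENGINE_SPEED signal for two seconds
mcp-can monitor --signal ENGINE_SPEED --seconds 2.0

# Demo OBD-II request: service 01, PID 0C (engine RPM)
mcp-can obd-request --service 0x01 --pid 0x0C
```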
## MCP Inspector (GUI for your tools)

Use the official Inspector to explore and call your MCP tools without writing a host:
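For example, launch it via `npx`:

```shell
npx @modelcontextprotocol/inspector
```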
When prompted, connect to your server:

- Transport: SSE
- URL: `http://localhost:6278/sse`
You can then:

- List tools and resources (`read_can_frames`, `decode_can_frame`, `filter_frames`, `monitor_signal`, `dbc_info`).
- Call a tool (e.g., monitor `ENGINE_SPEED` for 5 seconds) and view JSON output live.
## Using with Ollama (local LLM)

1. Ensure Ollama is running (`ollama serve`) and pull a model: `ollama pull llama3`.
2. Run simulator + MCP server (see Quickstart).
3. Point your MCP-capable host at `http://localhost:6278/sse` and configure its model endpoint to `http://localhost:11434` with your model name (e.g., `llama3`).
4. Prompt the host: “Monitor ENGINE_SPEED for 5 seconds” or “List all DBC messages.”
If you need a minimal host, pair `@modelcontextprotocol/sdk` with Ollama (see the SDK docs), or use Inspector for manual tool calls.
Example host config (OpenAI-compatible endpoint to local Ollama):
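A sketch of what such a config might look like; the field names are illustrative and depend on your host. Ollama exposes its OpenAI-compatible API under `/v1`:

```json
{
  "model": {
    "baseUrl": "http://localhost:11434/v1",
    "name": "llama3"
  },
  "mcpServers": {
    "mcp-can": { "url": "http://localhost:6278/sse" }
  }
}
```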
## CLI Reference

- `mcp-can simulate` – start ECU simulator using `vehicle.dbc`.
- `mcp-can server [--port 6278]` – run MCP SSE server.
- `mcp-can frames --seconds 1.0` – capture raw frames as JSON.
- `mcp-can decode --id <hex|int> --data <bytes>` – decode a single frame.
- `mcp-can monitor --signal <NAME> --seconds 2.0` – watch one signal.
- `mcp-can obd-request --service <hex|int> [--pid <hex|int>]` – demo OBD-II request.
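As background for `obd-request`: standard OBD-II (SAE J1979) responses scale raw data bytes into physical values. A minimal decode sketch for two common mode 01 PIDs (the helper name is illustrative, not part of this package):

```python
def decode_obd_pid(pid: int, data: bytes) -> float:
    """Decode the data bytes of an OBD-II mode 01 response (sketch)."""
    if pid == 0x0C:  # engine RPM: (256*A + B) / 4
        return (data[0] * 256 + data[1]) / 4
    if pid == 0x0D:  # vehicle speed: A, in km/h
        return float(data[0])
    raise ValueError(f"PID 0x{pid:02X} not handled in this sketch")

print(decode_obd_pid(0x0C, bytes([0x1A, 0xF8])))  # → 1726.0 rpm
```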
## Configuration

Env vars (prefix `MCP_CAN_`):

- `CAN_INTERFACE` (default `virtual`)
- `CAN_CHANNEL` (default `bus0`)
- `DBC_PATH` (default `vehicle.dbc`)
- `MCP_PORT` (default `6278`)
You can set these in a `.env` file at repo root.
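For example, assuming the `MCP_CAN_` prefix is prepended verbatim to the variable names above:

```shell
MCP_CAN_CAN_INTERFACE=virtual
MCP_CAN_CAN_CHANNEL=bus0
MCP_CAN_DBC_PATH=vehicle.dbc
MCP_CAN_MCP_PORT=6278
```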
## Docker
Build:
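For example (the image tag is illustrative; add `-f docker/Dockerfile` if the Dockerfile lives under `docker/`):

```shell
docker build -t mcp-can .
```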
Run (combined server + simulator):
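A sketch, assuming the image above and the default SSE port:

```shell
docker run --rm -p 6278:6278 mcp-can
```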
Compose (from docker/):
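For example:

```shell
cd docker
docker compose up --build
```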
## Development & Testing
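Typical commands, using the tools named in the Highlights (exact paths may differ):

```shell
pytest                 # run unit tests
ruff check src tests   # lint
mypy src               # type-check
```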
## Troubleshooting

- No frames? Ensure both simulator and server use the same interface/channel (`virtual`/`bus0` by default).
- DBC missing? Set `MCP_CAN_DBC_PATH` or place `vehicle.dbc` in the repo root.
- Docker networking: expose port `6278` so your MCP host can reach the SSE endpoint.
## License

MIT (see `LICENSE`). Educational/prototyping use only; use certified hardware for real automotive work.