# Blender MCP Router
FastMCP server that exposes two layers of functionality:

- LLM routing through LiteLLM, so a single FastMCP tool can reach OpenAI, Anthropic, xAI, or any other LiteLLM-supported provider.
- A Blender bridge that proxies to the Blender MCP add-on over a persistent TCP socket, providing scene inspection, PolyHaven / Sketchfab helpers, and Hyper3D automation.
The server is designed for FastMCP Hub distribution: `pyproject.toml` defines the package, the MCP endpoint is hosted via FastMCP, and an optional REST shim is exposed for the Blender add-on.
## Requirements
- Python 3.10+
- Blender MCP add-on running locally (for the Blender tools)
- API keys for any LLM providers you plan to route through LiteLLM
Install dependencies via:
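For example, assuming a standard `pyproject.toml`-based setup, an editable install would look like this:

```bash
# Editable install of the package and its dependencies from pyproject.toml
pip install -e .
```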
## Configuration
Copy `.env.example` to `.env` and fill in the values.
| Variable | Purpose |
| --- | --- |
| `OPENAI_API_KEY` | Used by LiteLLM when routing to OpenAI models |
| `XAI_API_KEY` | Used for xAI (Grok) requests via LiteLLM |
| `ANTHROPIC_API_KEY` | Used for Anthropic models via LiteLLM |
| Model alias overrides | Optional overrides for the OpenAI, xAI, and Anthropic aliases in `MODEL_MAP` (see `.env.example` for the exact variable names) |
| `MCP_REST_TOKEN` | Shared secret for the REST shim (`X-Token` header) |
All LLM-specific environment variables supported by LiteLLM can be passed through here as well (see LiteLLM docs for provider-specific keys).
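A minimal `.env` might look like the sketch below; the API key variable names follow LiteLLM's standard conventions, so check `.env.example` for the exact names this project reads:

```bash
# LLM provider credentials read by LiteLLM
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
XAI_API_KEY=xai-...

# Shared secret required in the X-Token header of REST shim requests
MCP_REST_TOKEN=change-me
```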
## Running
After configuration, start the server via the script entry point:
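Assuming the console script defined in `pyproject.toml` matches the package name used with `pipx` below:

```bash
# Launch the MCP endpoint and the REST bridge (see the ports below)
blender-mcp-router
```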
The process starts two services:
- FastMCP HTTP endpoint on `127.0.0.1:8974/mcp`
- REST bridge for the Blender add-on on `127.0.0.1:8975`
Both services are started inside `server.main()` so FastMCP Hub (or `pipx run blender-mcp-router`) can launch them.
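To confirm the MCP endpoint is reachable, here is a minimal sketch using the `fastmcp` Python client; the argument names passed to `generate_text` (`prompt`, `model`) are illustrative assumptions rather than the tool's documented signature:

```python
import asyncio
from fastmcp import Client

async def main():
    # Connect to the locally hosted FastMCP HTTP endpoint
    async with Client("http://127.0.0.1:8974/mcp") as client:
        tools = await client.list_tools()
        print([tool.name for tool in tools])

        # Call the LiteLLM-backed text generation tool
        # (argument names are illustrative; see server.py for the real signature)
        result = await client.call_tool(
            "generate_text",
            {"prompt": "Say hello from Blender MCP Router", "model": "gpt-4o-mini"},
        )
        print(result)

asyncio.run(main())
```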
## MCP Tools
`server.py` registers the following FastMCP tools:

- `generate_text`: Unified text generation routed through LiteLLM
- Blender tools: `get_scene_info`, `get_object_info`, `get_viewport_screenshot`, `execute_blender_code`, PolyHaven/Sketchfab helpers, and Hyper3D automation helpers
Each Blender tool forwards to the Blender MCP add-on using a JSON-over-TCP API. See that add-on for port configuration (default `localhost:9876`).
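For illustration, a minimal sketch of that JSON-over-TCP exchange is shown below. The `{"type": ..., "params": ...}` message shape is an assumption based on the Blender MCP add-on's conventions, and unlike the router (which keeps the socket persistent) this sketch opens a fresh connection per call:

```python
import json
import socket

def send_blender_command(command_type: str, params: dict | None = None,
                         host: str = "localhost", port: int = 9876) -> dict:
    """Send one JSON command to the Blender MCP add-on and return its JSON reply."""
    payload = json.dumps({"type": command_type, "params": params or {}}).encode("utf-8")
    with socket.create_connection((host, port), timeout=15) as sock:
        sock.sendall(payload)
        buffer = b""
        while True:
            chunk = sock.recv(8192)
            if not chunk:
                break
            buffer += chunk
            try:
                # Stop as soon as a complete JSON reply has arrived
                return json.loads(buffer.decode("utf-8"))
            except (json.JSONDecodeError, UnicodeDecodeError):
                continue  # reply not complete yet; keep reading
    raise ConnectionError("Blender MCP add-on closed the connection without a reply")

# Example: ask the running Blender instance for a scene summary
# print(send_blender_command("get_scene_info"))
```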
## REST Shim `/tools/call`
The REST API exposes a subset of the MCP tools so non-MCP clients (such as the Blender add-on) can call them. Requests must include an `X-Token` header if `MCP_REST_TOKEN` is set. The response format mirrors MCP `content` objects (`text`, `json`, `image`).
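As an example, a hypothetical call with `curl` (the `name`/`arguments` body shape is an assumption; check `server.py` for the shim's actual request schema):

```bash
curl -X POST http://127.0.0.1:8975/tools/call \
  -H "Content-Type: application/json" \
  -H "X-Token: $MCP_REST_TOKEN" \
  -d '{"name": "get_scene_info", "arguments": {}}'
```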
## Health Check
`GET /health` returns `{ "ok": true }` so deployment targets can monitor the process.
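For example, assuming the health route is served alongside the REST bridge on port 8975:

```bash
curl http://127.0.0.1:8975/health
# → {"ok": true}
```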
## Development
- Run linting/formatting as desired (none enforced yet).
- The LiteLLM dependency keeps provider selection abstract; add more aliases in `MODEL_MAP` as needed.
- Additional tools can be exposed by adding `@mcp.tool()` functions and listing them in `_HTTP_EXPOSED_TOOL_NAMES` when required by the REST shim (see the sketch below).
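For example, a hypothetical `list_materials` tool could be registered as sketched below; the tool name, its body, and the assumption that `_HTTP_EXPOSED_TOOL_NAMES` is a set are illustrative only:

```python
# In server.py (sketch): register an additional FastMCP tool.
# `list_materials` is hypothetical; `send_blender_command` refers to the
# JSON-over-TCP helper sketched in the MCP Tools section above.
@mcp.tool()
def list_materials() -> list[str]:
    """Return the material names from the current Blender scene."""
    reply = send_blender_command("list_materials")
    return reply.get("materials", [])

# Expose it through the REST shim as well (assuming the container is a set)
_HTTP_EXPOSED_TOOL_NAMES.add("list_materials")
```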