# Unity MCP with Ollama Integration
A Unity MCP (Model Context Protocol) package that enables seamless communication between Unity and local Large Language Models (LLMs) via Ollama. This package extends justinpbarnett/unity-mcp to work with local LLMs, allowing developers to automate workflows, manipulate assets, and control the Unity Editor programmatically without relying on cloud-based LLMs.
## Overview

The Unity MCP with Ollama Integration provides a bidirectional communication channel between:

- Unity (via C#)
- A Python MCP server
- Local LLMs running through Ollama
This enables:

- **Asset Management**: Create, import, and manipulate Unity assets programmatically
- **Scene Control**: Manage scenes, objects, and their properties
- **Material Editing**: Modify materials and their properties
- **Script Integration**: View, create, and update Unity scripts
- **Editor Automation**: Control Unity Editor functions like undo, redo, play, and build
All powered by your own local LLMs, with no need for an internet connection or API keys.
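To make the round trip concrete, here is a minimal sketch of the Ollama side of that channel, using only the Python standard library. The `/api/generate` endpoint and its `response` field are Ollama's documented REST API; the `create_object` command in the closing comment is a hypothetical illustration, not this package's actual wire format:

```python
import json
import urllib.request

# Send a natural-language instruction to the local LLM via Ollama's REST API.
request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "deepseek-r1:14b",
        "prompt": "Create a red cube at position (0, 1, 0). "
                  "Reply with a single JSON command.",
        "stream": False,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    llm_output = json.loads(response.read())["response"]

# The MCP server would then turn output like this into a call to the Unity
# bridge, e.g. (hypothetical schema for illustration only):
# {"command": "create_object",
#  "params": {"primitive": "cube", "position": [0, 1, 0], "color": [1, 0, 0]}}
print(llm_output)
```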
## Supported Models

This implementation is specifically configured to work with the following Ollama models:

- **deepseek-r1:14b** - A 14-billion-parameter model with strong reasoning capabilities
- **gemma3:12b** - Google's 12-billion-parameter model with good general capabilities
You can easily switch between these models in the Unity MCP window.
## Installation (Asset Method)

Due to compatibility issues with Unity's Package Manager, we recommend installing via the Asset Method.

### Prerequisites
- Unity 2020.3 LTS or newer
- Python 3.10 or newer
- Ollama installed on your system
- The following LLM models pulled in Ollama:

  ```bash
  ollama pull deepseek-r1:14b
  ollama pull gemma3:12b
  ```
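To double-check the model prerequisite from a script, you can ask Ollama which models are available locally. This sketch uses Ollama's documented `/api/tags` endpoint and assumes the Ollama server is already running on its default port:

```python
import json
import urllib.request

# With the Ollama server running, /api/tags lists the locally pulled models.
with urllib.request.urlopen("http://localhost:11434/api/tags") as response:
    models = {m["name"] for m in json.loads(response.read())["models"]}

for required in ("deepseek-r1:14b", "gemma3:12b"):
    status = "OK" if required in models else "missing - run `ollama pull`"
    print(f"{required}: {status}")
```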
### Step 1: Download and Install Editor Scripts

1. Download or clone this repository:

   ```bash
   git clone https://github.com/ZundamonnoVRChatkaisetu/unity-mcp-ollama.git
   ```

2. Create a folder in your Unity project's `Assets` directory:

   ```
   Assets/UnityMCPOllama
   ```

3. Copy the `Editor` folder from the cloned repository to your Unity project:

   ```
   # Copy the entire Editor folder
   [Repository]/Editor → Assets/UnityMCPOllama/Editor
   ```

4. Verify the folder structure is correct:

   ```
   Assets/
     UnityMCPOllama/
       Editor/
         MCPEditorWindow.cs
         UnityMCPBridge.cs
   ```

5. Let Unity import and compile the scripts.
### Step 2: Set Up Python Environment

1. Create a folder for the Python environment (outside your Unity project):

   ```bash
   mkdir PythonMCP
   cd PythonMCP
   ```

2. Copy the `Python` folder from the cloned repository:

   ```bash
   cp -r [Repository]/Python .
   ```

3. Create and activate a virtual environment:

   ```bash
   # Create a virtual environment
   python -m venv venv

   # Activate the virtual environment
   # On Windows:
   venv\Scripts\activate
   # On macOS/Linux:
   source venv/bin/activate
   ```

4. Install dependencies:

   ```bash
   cd Python
   pip install -e .
   ```
### Step 3: Configure Ollama

1. Ensure Ollama is installed and running on your system
2. Pull the supported models:

   ```bash
   ollama pull deepseek-r1:14b
   ollama pull gemma3:12b
   ```

3. Start the Ollama server:

   ```bash
   ollama serve
   ```
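As a quick liveness check after starting the server, Ollama's `/api/version` endpoint returns a small JSON payload whenever the server is reachable (a minimal sketch):

```python
import json
import urllib.request

# Ollama answers GET /api/version whenever the server is up; a connection
# error here means `ollama serve` isn't running (or a firewall is in the way).
with urllib.request.urlopen("http://localhost:11434/api/version") as response:
    print(json.loads(response.read()))  # e.g. {"version": "0.6.2"}
```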
## Using Unity MCP with Ollama

### Step 1: Start Unity Bridge

1. Open your Unity project
2. Navigate to `Window > Unity MCP` to open the MCP window
3. Click the **Start Bridge** button to start the Unity bridge
### Step 2: Start Python Server

1. Open a command prompt or terminal
2. Navigate to your Python environment:

   ```bash
   cd PythonMCP
   ```

3. Activate the virtual environment:

   ```bash
   # On Windows:
   venv\Scripts\activate
   # On macOS/Linux:
   source venv/bin/activate
   ```

4. Navigate to the Python directory and start the server:

   ```bash
   cd Python
   python server.py
   ```
### Step 3: Configure Ollama Settings

1. In the Unity MCP window, locate the **Ollama Configuration** section
2. Verify or update the following settings:
   - **Host**: localhost (default)
   - **Port**: 11434 (default)
   - **Model**: Select either `deepseek-r1:14b` or `gemma3:12b`
   - **Temperature**: Adjust as needed (0.0-1.0)
3. Click **Apply Ollama Configuration**
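For reference, these settings correspond to the parameters of an Ollama chat request. The sketch below uses Ollama's documented `/api/chat` API to show that mapping; how this package actually assembles its requests internally may differ:

```python
import json
import urllib.request

host, port = "localhost", 11434           # Host / Port settings
payload = {
    "model": "deepseek-r1:14b",           # Model setting
    "messages": [
        {"role": "user", "content": "Create a red cube at position (0, 1, 0)"}
    ],
    "options": {"temperature": 0.2},      # Temperature setting
    "stream": False,
}

request = urllib.request.Request(
    f"http://{host}:{port}/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())["message"]["content"]
print(reply)
```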
### Step 4: Use the Chat Interface

1. Click the **Show Chat Interface** button in the Unity MCP window
2. Type your instructions in the message field
3. Click **Send** to process your request

Example prompts:

- "Create a red cube at position (0, 1, 0)"
- "Add a sphere to the scene and apply a blue material"
- "List all objects in the current scene"
- "Write a simple movement script and attach it to the cube"
## Connection Status Indicators

The Unity MCP window provides status information for each component:

- **Python Server Status**: Indicates whether the Python server is running
  - Green: Connected
  - Yellow: Connected but with issues
  - Red: Not connected
- **Unity Bridge Status**: Shows if the Unity socket server is running
  - Running: Unity is listening for connections
  - Stopped: Unity socket server is not active
- **Ollama Status**: Shows the connection status to Ollama
  - Connected: Successfully connected to the Ollama server
  - Not Connected: Unable to connect to Ollama
## Troubleshooting

### Common Issues

**"Not Connected" Status for Python Server**

- Ensure the Python server is running (`python server.py`)
- Check for errors in the Python console
- Verify the Unity Bridge is running

**Cannot Find Unity MCP Menu**

- Make sure the Editor scripts are properly imported in your project
- Check the Unity console for any errors
- Restart Unity if necessary

**Ollama Connection Issues**

- Verify Ollama is running with `ollama serve`
- Check that the models are properly pulled
- Ensure no firewall is blocking port 11434
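If you suspect a firewall or networking problem, a quick way to test reachability is to open a plain TCP connection to the Ollama port (a minimal sketch using only the standard library):

```python
import socket

# Try to open a TCP connection to the Ollama port; a refused connection
# or timeout points to a firewall rule or a server that isn't running.
try:
    with socket.create_connection(("localhost", 11434), timeout=3):
        print("Port 11434 is reachable")
except OSError as error:
    print(f"Cannot reach port 11434: {error}")
```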
**MCP Command Execution Fails**

- Check the Python console for detailed error messages
- Verify that the Unity Bridge is running
- Make sure the prompt is clear and specific
### Explicit Setup Instructions for Python Environment
If you encounter issues setting up the Python environment:
1. Install Python 3.10 or newer
2. Install Ollama from ollama.ai
3. Create a dedicated directory for the Python environment:

   ```bash
   mkdir C:\PythonMCP
   cd C:\PythonMCP
   ```

4. Clone or download this repository and copy the Python folder:

   ```bash
   git clone https://github.com/ZundamonnoVRChatkaisetu/unity-mcp-ollama.git
   copy unity-mcp-ollama\Python .
   ```

5. Create a virtual environment:

   ```bash
   python -m venv venv
   ```

6. Activate the virtual environment:

   ```bash
   venv\Scripts\activate
   ```

7. Install dependencies:

   ```bash
   cd Python
   pip install -e .
   ```

8. Run the server:

   ```bash
   python server.py
   ```
## Performance Considerations

Local LLM performance depends on your hardware:

- For `deepseek-r1:14b`: Recommended minimum 12GB VRAM
- For `gemma3:12b`: Recommended minimum 10GB VRAM
- CPU-only operation is possible but will be significantly slower
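As a rough sanity check on these figures: Ollama's default model downloads are typically 4-bit quantized, so the weights alone occupy about half a byte per parameter, with the KV cache and runtime adding a few gigabytes on top. The estimate below is a back-of-the-envelope sketch under those assumptions, not a measurement:

```python
# Back-of-the-envelope VRAM estimate. Assumes ~0.5 bytes per parameter
# (4-bit quantization, Ollama's usual default) plus a rough ~3 GB allowance
# for the KV cache and runtime; actual usage grows with context length.
def estimate_vram_gb(params_billion: float, bytes_per_param: float = 0.5,
                     overhead_gb: float = 3.0) -> float:
    return params_billion * bytes_per_param + overhead_gb

for name, size_b in [("deepseek-r1:14b", 14), ("gemma3:12b", 12)]:
    print(f"{name}: ~{estimate_vram_gb(size_b):.0f} GB")
# Prints ~10 GB and ~9 GB, in line with the 12 GB / 10 GB recommendations
# above once you leave some headroom.
```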
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request or open an Issue.
## License
This project is licensed under the MIT License.
## Acknowledgments

- Based on [justinpbarnett/unity-mcp](https://github.com/justinpbarnett/unity-mcp)
- Uses [Ollama](https://ollama.ai) for local LLM integration