Enables AI-powered CAD automation in Fusion 360, allowing creation of 3D models, sketches, extrusions, patterns, fillets, and other design operations through natural language prompts
Connects to Google's Gemini AI models to generate Fusion 360 CAD code and provide intelligent design assistance
Provides local, offline AI model support for generating Fusion 360 CAD code, offering privacy-focused AI assistance for design automation
Integrates GPT-4 and GPT-3.5-turbo models for generating Fusion 360 CAD code and providing AI-assisted design capabilities
Manages conversation persistence, design history tracking, and context memory for AI-assisted CAD sessions
Fusion 360 MCP - Production Ready π
AI-Powered CAD Automation with Modern Chat Interface
A production-ready framework for AI-assisted design in Fusion 360, featuring a modern web-based chat UI, multiple LLM backends, and enhanced accuracy for professional CAD automation.
β¨ Features
π¨ Modern Chat Interface
- Beautiful, responsive web UI 
- Real-time WebSocket communication 
- Code preview with syntax highlighting 
- One-click code execution 
- Conversation history and persistence 
π€ Multiple AI Backends
- Ollama - Local, offline, privacy-focused 
- OpenAI - GPT-4 and GPT-3.5-turbo 
- Google Gemini - Latest Gemini models 
π Enhanced Safety & Accuracy
- Advanced code validation and syntax checking 
- Security filtering for dangerous operations 
- Improved prompt engineering for better results 
- Unit conversion handling (mm β cm) 
- Comprehensive error handling and logging 
πΎ Conversation Management
- SQLite-based conversation persistence 
- Short-term and long-term context memory 
- Automatic conversation summarization 
- Design history tracking 
π Production Features
- WebSocket server with auto-reconnection 
- Configurable settings via JSON 
- Comprehensive logging system 
- Retry mechanism with exponential backoff 
- Real-time execution feedback 
π Prerequisites
- Autodesk Fusion 360 (Windows or Mac) 
- Python 3.9+ (System Python, NOT Fusion's embedded Python) 
- LLM Backend (choose one): - Ollama (recommended for local use) 
- OpenAI API key 
- Google Gemini API key 
 
Important: The server runs on your system Python to avoid code signing issues with Fusion 360's embedded Python.
π Quick Start
1. Installation
Clone or Download
Install Dependencies
Use your system Python (not Fusion's Python):
macOS/Linux:
Windows:
Note: The previous setup.sh/setup.bat scripts installed to Fusion's Python, which has code signing restrictions. Use system Python instead.
2. For Ollama Users (Recommended)
3. Start the Bridge in Fusion 360
To enable code execution from web UI:
- Open Fusion 360 
- Go to Tools > Add-Ins > Scripts and Add-Ins 
- Click Scripts tab 
- Select fusion_bridge script 
- Click Run 
- You'll see "Fusion MCP Bridge started!" message 
Keep Fusion 360 open with the bridge running.
4. Start the Web Server
Then open http://localhost:8888 in your browser.
π Usage Guide
Chat Interface
- Select AI Backend: Choose Ollama, OpenAI, or Gemini from sidebar 
- Configure Model: Select from available models 
- Enter Prompt: Describe what you want to create 
- Review Code: AI-generated code appears with syntax highlighting 
- Execute: Click "Execute" to run the code in Fusion 360 
Example Prompts
Tips for Accuracy
- Be Specific: Include exact dimensions and units 
- Use Standard Terms: Use CAD terminology (extrude, sketch, pattern, etc.) 
- Specify Location: Mention origin, planes, or reference geometry 
- One Operation: Focus on one design operation per prompt 
- Units: Always specify mm, cm, or inches 
ποΈ Architecture
Key Components
fusion_mcp_core.py
- AI interface with multiple backends 
- Enhanced context management 
- Improved validation and execution 
- Advanced error handling 
server.py
- WebSocket server for real-time communication 
- SQLite database for persistence 
- Model management and API integration 
- Async request handling 
chat_ui.html + chat_ui.js
- Modern, responsive UI 
- Real-time messaging 
- Code preview and execution 
- Conversation management 
βοΈ Configuration
Edit config.json to customize:
π Logging
Logs are saved to your home directory:
- ~/mcp_server.log- Server activity and errors
- ~/mcp_core.log- Core AI and execution logs
- ~/mcp_conversations.db- SQLite conversation database
π§ Troubleshooting
Ollama Not Responding
Port Already in Use
Dependencies Not Found
Code Execution Fails
- Ensure Fusion 360 has an active document 
- Check logs for detailed error messages 
- Verify code doesn't use forbidden operations 
Browser Doesn't Open
- Manually navigate to - http://localhost:8080
- Check firewall settings 
- Verify server is running (check terminal output) 
π Security
- Code Validation: Filters dangerous operations 
- Sandbox Execution: Controlled execution environment 
- API Key Safety: Never logged or stored in plain text 
- Input Sanitization: All user inputs are validated 
π Advanced Usage
Custom Plugins
API Integration
Batch Processing
π Performance
- Response Time: 1-5 seconds (local Ollama) 
- Accuracy: 90%+ for standard operations 
- Supported Operations: 100+ Fusion 360 API operations 
- Concurrent Users: Up to 10 simultaneous connections 
π€ Contributing
Contributions are welcome! Please:
- Fork the repository 
- Create a feature branch 
- Make your changes 
- Submit a pull request 
π License
MIT License - see LICENSE file for details
π Acknowledgments
- Fusion 360 API documentation 
- Ollama team for local LLM support 
- OpenAI and Google for their AI models 
- Community contributors 
π Support
- Issues: GitHub Issues 
- Discussions: GitHub Discussions 
- Email: support@example.com 
πΊοΈ Roadmap
- Streaming responses for better UX 
- Multi-language support 
- Voice input integration 
- 3D preview in chat 
- Collaborative design sessions 
- Cloud deployment option 
- Mobile app companion 
Made with β€οΈ for the Fusion 360 community
This server cannot be installed
local-only server
The server can only run on the client's local machine because it depends on local resources.
Enables AI-powered CAD automation in Autodesk Fusion 360 through natural language prompts. Features a modern web chat interface with multiple LLM backends for creating 3D models, sketches, and parametric designs.