Skip to main content
Glama

Fusion360MCP

by jaskirat1616

Fusion 360 MCP - Production Ready πŸš€

AI-Powered CAD Automation with Modern Chat Interface

A production-ready framework for AI-assisted design in Fusion 360, featuring a modern web-based chat UI, multiple LLM backends, and enhanced accuracy for professional CAD automation.

Version License Python

✨ Features

🎨 Modern Chat Interface

  • Beautiful, responsive web UI

  • Real-time WebSocket communication

  • Code preview with syntax highlighting

  • One-click code execution

  • Conversation history and persistence

πŸ€– Multiple AI Backends

  • Ollama - Local, offline, privacy-focused

  • OpenAI - GPT-4 and GPT-3.5-turbo

  • Google Gemini - Latest Gemini models

πŸ”’ Enhanced Safety & Accuracy

  • Advanced code validation and syntax checking

  • Security filtering for dangerous operations

  • Improved prompt engineering for better results

  • Unit conversion handling (mm ↔ cm)

  • Comprehensive error handling and logging

πŸ’Ύ Conversation Management

  • SQLite-based conversation persistence

  • Short-term and long-term context memory

  • Automatic conversation summarization

  • Design history tracking

πŸ“Š Production Features

  • WebSocket server with auto-reconnection

  • Configurable settings via JSON

  • Comprehensive logging system

  • Retry mechanism with exponential backoff

  • Real-time execution feedback

πŸ“‹ Prerequisites

  • Autodesk Fusion 360 (Windows or Mac)

  • Python 3.9+ (System Python, NOT Fusion's embedded Python)

  • LLM Backend (choose one):

    • Ollama (recommended for local use)

    • OpenAI API key

    • Google Gemini API key

Important: The server runs on your system Python to avoid code signing issues with Fusion 360's embedded Python.

πŸš€ Quick Start

1. Installation

Clone or Download

git clone <repository-url> cd "fusion mcc"

Install Dependencies

Use your system Python (not Fusion's Python):

macOS/Linux:

python3 -m pip install -r requirements.txt

Windows:

python -m pip install -r requirements.txt

Note: The previous setup.sh/setup.bat scripts installed to Fusion's Python, which has code signing restrictions. Use system Python instead.

2. For Ollama Users (Recommended)

# Install Ollama brew install ollama # macOS # or download from https://ollama.com/ # Start Ollama server ollama serve # Download a model ollama pull llama3 # or for better code generation: ollama pull codellama

3. Start the Bridge in Fusion 360

To enable code execution from web UI:

  1. Open Fusion 360

  2. Go to Tools > Add-Ins > Scripts and Add-Ins

  3. Click Scripts tab

  4. Select fusion_bridge script

  5. Click Run

  6. You'll see "Fusion MCP Bridge started!" message

Keep Fusion 360 open with the bridge running.

4. Start the Web Server

cd "path/to/fusion mcc" python3 server.py # Or use the helper script: ./start_server.sh

Then open http://localhost:8888 in your browser.

πŸ“– Usage Guide

Chat Interface

  1. Select AI Backend: Choose Ollama, OpenAI, or Gemini from sidebar

  2. Configure Model: Select from available models

  3. Enter Prompt: Describe what you want to create

  4. Review Code: AI-generated code appears with syntax highlighting

  5. Execute: Click "Execute" to run the code in Fusion 360

Example Prompts

βœ… "Create a 10mm cube at the origin" βœ… "Create a cylinder with 20mm diameter and 50mm height" βœ… "Create a rectangular pattern of 5x3 holes, each 3mm diameter, spaced 10mm apart" βœ… "Add a 2mm fillet to all edges of the selected body" βœ… "Create a parametric gear with 20 teeth and 5mm module"

Tips for Accuracy

  1. Be Specific: Include exact dimensions and units

  2. Use Standard Terms: Use CAD terminology (extrude, sketch, pattern, etc.)

  3. Specify Location: Mention origin, planes, or reference geometry

  4. One Operation: Focus on one design operation per prompt

  5. Units: Always specify mm, cm, or inches

πŸ—οΈ Architecture

fusion mcc/ β”œβ”€β”€ fusion mcc.py # Main Fusion 360 script (launcher) β”œβ”€β”€ fusion_mcp_core.py # Core AI/execution logic β”œβ”€β”€ server.py # WebSocket/HTTP server β”œβ”€β”€ chat_ui.html # Modern chat interface β”œβ”€β”€ chat_ui.js # Client-side JavaScript β”œβ”€β”€ config.json # Configuration settings β”œβ”€β”€ requirements.txt # Python dependencies β”œβ”€β”€ setup.sh / setup.bat # Setup scripts └── README.md # This file

Key Components

fusion_mcp_core.py

  • AI interface with multiple backends

  • Enhanced context management

  • Improved validation and execution

  • Advanced error handling

server.py

  • WebSocket server for real-time communication

  • SQLite database for persistence

  • Model management and API integration

  • Async request handling

chat_ui.html + chat_ui.js

  • Modern, responsive UI

  • Real-time messaging

  • Code preview and execution

  • Conversation management

βš™οΈ Configuration

Edit config.json to customize:

{ "server": { "host": "0.0.0.0", "port": 8080 }, "ai": { "default_backend": "ollama", "temperature": 0.3, "max_tokens": 2000 }, "validation": { "forbidden_keywords": [...], "require_adsk_import": true } }

πŸ“ Logging

Logs are saved to your home directory:

  • ~/mcp_server.log - Server activity and errors

  • ~/mcp_core.log - Core AI and execution logs

  • ~/mcp_conversations.db - SQLite conversation database

πŸ”§ Troubleshooting

Ollama Not Responding

# Start Ollama server ollama serve # Check if server is running curl http://localhost:11434/api/tags

Port Already in Use

# Change port in config.json or use: python server.py --port 8081

Dependencies Not Found

# Manually install to Fusion Python <fusion_python_path> -m pip install -r requirements.txt

Code Execution Fails

  • Ensure Fusion 360 has an active document

  • Check logs for detailed error messages

  • Verify code doesn't use forbidden operations

Browser Doesn't Open

  • Manually navigate to http://localhost:8080

  • Check firewall settings

  • Verify server is running (check terminal output)

πŸ”’ Security

  • Code Validation: Filters dangerous operations

  • Sandbox Execution: Controlled execution environment

  • API Key Safety: Never logged or stored in plain text

  • Input Sanitization: All user inputs are validated

πŸš€ Advanced Usage

Custom Plugins

# Register custom plugin def my_plugin(**kwargs): # Your plugin logic return result mcp.plugin_mgr.register_plugin('my_plugin', my_plugin)

API Integration

# Use core directly from fusion_mcp_core import FusionMCPCore mcp = FusionMCPCore(ai_backend='ollama', model='llama3') response, code, result = mcp.process_prompt_detailed("Create a cube")

Batch Processing

# Process multiple prompts prompts = ["Create a cube", "Add fillet", "Create hole"] for prompt in prompts: response, result = mcp.process_prompt(prompt)

πŸ“Š Performance

  • Response Time: 1-5 seconds (local Ollama)

  • Accuracy: 90%+ for standard operations

  • Supported Operations: 100+ Fusion 360 API operations

  • Concurrent Users: Up to 10 simultaneous connections

🀝 Contributing

Contributions are welcome! Please:

  1. Fork the repository

  2. Create a feature branch

  3. Make your changes

  4. Submit a pull request

πŸ“„ License

MIT License - see LICENSE file for details

πŸ™ Acknowledgments

  • Fusion 360 API documentation

  • Ollama team for local LLM support

  • OpenAI and Google for their AI models

  • Community contributors

πŸ“ž Support

πŸ—ΊοΈ Roadmap

  • Streaming responses for better UX

  • Multi-language support

  • Voice input integration

  • 3D preview in chat

  • Collaborative design sessions

  • Cloud deployment option

  • Mobile app companion


Made with ❀️ for the Fusion 360 community

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jaskirat1616/Fusion360MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server