# Fusion 360 MCP - Multi-Model AI Integration
FusionMCP is a comprehensive Model Context Protocol (MCP) integration layer that connects Autodesk Fusion 360 with multiple AI backends (Ollama, OpenAI, Google Gemini, and Anthropic Claude) to enable AI-powered parametric CAD design through natural language.
## Features

- **Multi-Model Support**: Seamlessly switch between Ollama, OpenAI GPT-4o, Google Gemini, and Claude 3.5
- **Intelligent Routing**: Automatic fallback chain when the primary model fails
- **Parametric Design**: AI understands and generates parametric CAD operations
- **Safety First**: Built-in validation for dimensions, units, and geometric feasibility
- **Context Caching**: Conversation and design state persistence (JSON/SQLite)
- **Fusion 360 Integration**: Native add-in for a seamless workflow
- **Async Architecture**: Fast, non-blocking operations with retry logic
- **Structured Logging**: Detailed logs with Loguru
## Table of Contents

- [Features](#features)
- [Architecture](#architecture)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [Usage Examples](#usage-examples)
- [API Reference](#api-reference)
- [Model Comparison](#model-comparison)
- [Development](#development)
- [Troubleshooting](#troubleshooting)
- [Testing the System](#testing-the-system)
- [Contributing](#contributing)
- [License](#license)
- [Acknowledgments](#acknowledgments)
- [Support](#support)
- [Roadmap](#roadmap)
## Architecture

### Component Overview
**Fusion 360 Add-in (`fusion_addin/`)**
- Python-based Fusion 360 add-in
- Captures user intent and design context
- Executes structured CAD actions
- Real-time UI feedback

**MCP Server (`mcp_server/`)**
- FastAPI-based REST server
- Routes requests to the appropriate LLM
- Validates and normalizes responses
- Caches conversation history

**LLM Clients (`mcp_server/llm_clients/`)**
- Unified interface for all models
- Provider-specific implementations
- Automatic retry and error handling

**System Prompt (`prompts/system_prompt.md`)**
- Defines the FusionMCP personality
- Enforces JSON output format
- Provides action schema templates
## Installation

### Prerequisites

- Python 3.11+ (for the MCP server)
- Autodesk Fusion 360 (2025 version recommended)
- At least one LLM provider:
  - Ollama (local, free)
  - or an API key for OpenAI, Google Gemini, or Anthropic Claude
### Step 1: Clone the Repository
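The URL below is a placeholder; substitute the actual location of the FusionMCP repository:

```bash
# Placeholder URL: point this at the real FusionMCP repository
git clone https://github.com/YOUR_ACCOUNT/FusionMCP.git
cd FusionMCP
```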
### Step 2: Install Python Dependencies
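A typical setup, assuming dependencies are pinned in a `requirements.txt` at the repository root:

```bash
python -m venv .venv
source .venv/bin/activate          # Windows: .venv\Scripts\activate
pip install -r requirements.txt    # assumes requirements.txt exists at the repo root
```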
### Step 3: Configure the Environment
Create `config.json` from the example file, then edit it with your API keys:
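A rough sketch of the kind of settings `config.json` holds. Every key name below is an assumption; copy the example file shipped with the repository and keep its actual schema:

```json
{
  "default_model": "openai",
  "fallback_chain": ["openai", "gemini", "ollama"],
  "providers": {
    "openai": {"api_key": "sk-...", "model": "gpt-4o"},
    "gemini": {"api_key": "YOUR_GEMINI_KEY", "model": "gemini-1.5-flash"},
    "anthropic": {"api_key": "YOUR_ANTHROPIC_KEY", "model": "claude-3-5-sonnet-latest"},
    "ollama": {"host": "http://localhost:11434", "model": "llama3"}
  },
  "server": {"host": "127.0.0.1", "port": 8000}
}
```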
Alternative: use environment variables (`.env` file):
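The variable names here are assumptions; match them to whatever `mcp_server` actually reads:

```bash
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=your-gemini-key
ANTHROPIC_API_KEY=your-anthropic-key
OLLAMA_HOST=http://localhost:11434
```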
### Step 4: Install the Fusion 360 Add-in
1. Copy the `fusion_addin/` folder to the Fusion 360 add-ins directory:
   - Windows: `%APPDATA%\Autodesk\Autodesk Fusion 360\API\AddIns\`
   - macOS: `~/Library/Application Support/Autodesk/Autodesk Fusion 360/API/AddIns/`
2. Rename the folder to `FusionMCP`:
   ```bash
   cp -r fusion_addin "/Users/YOUR_USER/Library/Application Support/Autodesk/Autodesk Fusion 360/API/AddIns/FusionMCP"
   ```
3. Restart Fusion 360
4. Open Fusion 360 → Scripts and Add-Ins → Add-Ins tab → Select FusionMCP → Run
## Quick Start

### 1. Start the MCP Server
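The exact entry point depends on the repository layout; for a FastAPI server the usual launch patterns look like this (the module path and port are assumptions):

```bash
python -m mcp_server.main
# or, if the FastAPI app object is exposed directly:
uvicorn mcp_server.main:app --host 127.0.0.1 --port 8000
```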
Expected output:
### 2. Test the Server (Optional)
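For example, query the documented health endpoint, assuming the server listens on port 8000 (adjust to your `config.json`):

```bash
curl http://localhost:8000/health
```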
### 3. Use in Fusion 360
1. Open Fusion 360
2. Click Scripts and Add-Ins → Add-Ins → FusionMCP → Run
3. Click the MCP Assistant button in the toolbar
4. Enter a natural language command, for example:
   - "Create a 20mm cube"
   - "Design a mounting bracket with 4 holes"
   - "Make a cylindrical shaft 10mm diameter, 50mm long"
## Configuration

### Full Configuration Options
## Usage Examples

### Example 1: Simple Geometry
Prompt: "Create a 20mm cube"
Generated Action:
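For illustration only: the real action names and parameter keys are defined by the schema in `prompts/system_prompt.md` and may differ.

```json
{
  "action": "create_box",
  "parameters": {
    "width": 20,
    "height": 20,
    "depth": 20,
    "units": "mm"
  }
}
```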
### Example 2: Complex Design
Prompt: "Design a mounting bracket 100x50mm with 4 M5 mounting holes"
Generated Action Sequence:
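Again purely illustrative (truncated to two of the four holes); a multi-step design would come back as an ordered sequence of actions along these lines:

```json
[
  {"action": "create_box", "parameters": {"width": 100, "height": 50, "depth": 10, "units": "mm"}},
  {"action": "create_hole", "parameters": {"diameter": 5, "depth": 10, "units": "mm", "position": [10, 10, 0]}},
  {"action": "create_hole", "parameters": {"diameter": 5, "depth": 10, "units": "mm", "position": [90, 10, 0]}}
]
```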
### Example 3: Parametric Design
Prompt: "Create a shaft with diameter 2x of length"
## API Reference

### Endpoints

#### POST /mcp/command
Execute MCP command.
Request Body:
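A representative request; the field names are assumptions, so check the server code for the actual schema:

```json
{
  "prompt": "Create a 20mm cube",
  "model": "openai"
}
```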
Response:
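A sketch of the kind of payload to expect (again, field names are assumptions):

```json
{
  "status": "success",
  "model": "openai",
  "actions": [
    {
      "action": "create_box",
      "parameters": {"width": 20, "height": 20, "depth": 20, "units": "mm"}
    }
  ]
}
```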
#### GET /health
Health check.
Response:
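The exact payload depends on the implementation, but typically something minimal:

```json
{"status": "ok"}
```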
#### GET /models
List available models.
Response:
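Likely a list of the configured providers, for example:

```json
{"models": ["ollama", "openai", "gemini", "claude"]}
```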
#### GET /history?limit=10
Get conversation history.
Response:
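The shape below is assumed for illustration; the stored fields depend on the cache implementation:

```json
{
  "history": [
    {
      "prompt": "Create a 20mm cube",
      "response": {"action": "create_box"},
      "timestamp": "2025-01-01T12:00:00Z"
    }
  ]
}
```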
### Supported Actions
The server supports the following CAD operations:

- Create rectangular box
- Create cylinder
- Create sphere
- Create hole
- Extrude profile
- Round edges (fillet)
- Apply material

The exact action names and required parameters for each operation are defined by the action schema templates in `prompts/system_prompt.md`.
## Model Comparison

| Feature | Ollama (Local) | OpenAI GPT-4o | Google Gemini | Claude 3.5 |
|---------|----------------|---------------|---------------|------------|
| Cost | Free | $$ | $ | $$$ |
| Speed | Fast | Medium | Fast | Medium |
| Offline | ✅ Yes | ❌ No | ❌ No | ❌ No |
| JSON Mode | Limited | ✅ Native | Good | Good |
| Reasoning | Good | Excellent | Very Good | Excellent |
| Geometry | Good | Very Good | Excellent | Very Good |
| Creative | Good | Excellent | Very Good | Good |
| Best For | Privacy, Offline | Creative designs | Spatial reasoning | Safety validation |
### Recommended Workflows

- **Creative Design**: OpenAI GPT-4o → Claude (validation)
- **Geometric Precision**: Gemini → OpenAI
- **Privacy-First**: Ollama (all tasks)
- **Cost-Optimized**: Gemini Flash → Ollama (fallback)
## Development

### Project Structure
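A partial layout assembled from the paths referenced in this README; the actual repository contains more files and the exact placement may differ:

```
FusionMCP/
├── fusion_addin/               # Fusion 360 add-in (FusionMCP.manifest, UI, action execution)
├── mcp_server/                 # FastAPI MCP server
│   ├── llm_clients/            # Provider-specific clients (Ollama, OpenAI, Gemini, Claude)
│   └── router.py               # Model routing and fallback logic
├── prompts/
│   └── system_prompt.md        # Personality, JSON format, action schemas
├── config.json                 # Provider keys and server settings
└── logs/
    └── mcp_server.log          # Loguru output
```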
### Running Tests
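Assuming the suite uses pytest with tests in a `tests/` directory:

```bash
pytest tests/ -v
```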
### Adding a New LLM Provider

1. Create a client in `mcp_server/llm_clients/new_provider_client.py` (see the sketch below)
2. Register it in `router.py`
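A minimal sketch of what a new client might look like. The base class, module names, endpoint URL, and method signature below are assumptions; mirror the existing clients in `mcp_server/llm_clients/` for the real interface.

```python
# mcp_server/llm_clients/new_provider_client.py
# NOTE: BaseLLMClient, generate(), and the endpoint URL are all assumptions.
import httpx

from .base_client import BaseLLMClient  # hypothetical base class


class NewProviderClient(BaseLLMClient):
    """Client for a hypothetical additional LLM provider."""

    def __init__(self, api_key: str, model: str = "new-model-v1") -> None:
        self.api_key = api_key
        self.model = model

    async def generate(self, system_prompt: str, user_prompt: str) -> str:
        """Send the prompts to the provider and return the raw model output."""
        async with httpx.AsyncClient(timeout=60) as client:
            response = await client.post(
                "https://api.new-provider.example/v1/chat",  # placeholder URL
                headers={"Authorization": f"Bearer {self.api_key}"},
                json={
                    "model": self.model,
                    "messages": [
                        {"role": "system", "content": system_prompt},
                        {"role": "user", "content": user_prompt},
                    ],
                },
            )
            response.raise_for_status()
            return response.json()["choices"][0]["message"]["content"]
```

Registration in `router.py` should follow the same pattern the existing providers use there.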
### Code Style

- PEP 8 compliant
- Type annotations required
- Docstrings for all functions/classes
- Async/await for I/O operations
## Troubleshooting

### Common Issues
#### 1. Server Won't Start

**Error**: `Address already in use`

**Solution**: Change the port in `config.json`:
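For example, assuming the port lives under a `server` section (match the key names to your existing file):

```json
{
  "server": {
    "host": "127.0.0.1",
    "port": 8001
  }
}
```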
#### 2. Fusion Add-in Not Visible

**Solution**:
- Verify the add-in is in the correct folder
- Check that `FusionMCP.manifest` exists
- Restart Fusion 360
- Check the Scripts and Add-Ins → Add-Ins tab
#### 3. API Key Errors

**Error**: `401 Unauthorized`

**Solution**:
- Verify the API key in `config.json`
- Check that the key has the proper permissions
- Try environment variables instead
#### 4. Ollama Connection Failed

**Error**: `Connection refused`

**Solution**:
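Make sure the Ollama daemon is running and the configured model is available locally. For example (the model name is illustrative; pull whichever model your configuration references):

```bash
# Start the Ollama daemon if it is not already running
ollama serve

# Confirm the configured model is available locally
ollama list
ollama pull llama3
```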
#### 5. JSON Parsing Errors

**Solution**:
- Check that the system prompt is loaded
- Verify the model supports JSON mode
- Use temperature < 0.8 for better structure
- Enable `json_mode=True` in the OpenAI client
### Debug Mode

Enable verbose logging, then check the logs in `logs/mcp_server.log`.
### Health Check
## Testing the System

### Manual CLI Test
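A hand-rolled request against the documented `/mcp/command` endpoint; the port and the `prompt` field name are assumptions:

```bash
curl -X POST http://localhost:8000/mcp/command \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Create a 20mm cube"}'
```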
### Python Test Script
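A small smoke-test sketch using `requests`; the field names and the port are assumptions, so align them with the server's actual schema:

```python
"""Minimal smoke test against a locally running FusionMCP server."""
import requests

SERVER = "http://localhost:8000"  # adjust the port to match config.json


def main() -> None:
    # 1. Confirm the server is up
    health = requests.get(f"{SERVER}/health", timeout=10)
    print("health:", health.json())

    # 2. Send a simple natural-language design command
    resp = requests.post(
        f"{SERVER}/mcp/command",
        json={"prompt": "Create a 20mm cube"},  # "prompt" key is an assumption
        timeout=120,
    )
    resp.raise_for_status()
    print("action:", resp.json())


if __name__ == "__main__":
    main()
```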
## Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Development Setup
## License

MIT License - see the `LICENSE` file.
## Acknowledgments

- Autodesk Fusion 360 API
- FastAPI framework
- Anthropic, OpenAI, and Google for their LLM APIs
- Ollama for local LLM support
## Support

- **Issues**: GitHub Issues
- **Discussions**: GitHub Discussions
- **Documentation**: Wiki
## Roadmap

- WebSocket streaming for real-time chat
- Vision model support (CAD screenshot analysis)
- Multi-agent orchestration
- Generative Design API integration
- Geometry export to Markdown/docs
- Fusion 360 UI palette integration
- 3D preview before execution
- Undo/redo action history
- Cloud deployment support
Built with ❤️ for the Fusion 360 and AI community