Ultralytics MCP Server 🚀

A powerful Model Context Protocol (MCP) compliant server that provides RESTful API access to Ultralytics YOLO operations for computer vision tasks including training, validation, prediction, export, tracking, and benchmarking.

🎯 What is this?

The Ultralytics MCP Server transforms Ultralytics' command-line YOLO operations into a production-ready REST API service. Whether you're building computer vision applications, training custom models, or integrating YOLO into your workflow automation tools like n8n, this server provides a seamless bridge between Ultralytics' powerful capabilities and modern application architectures.

✨ Key Features

  • 🌐 RESTful API: HTTP endpoints for all YOLO operations with comprehensive request/response validation
  • 📡 Real-time Updates: Server-Sent Events (SSE) for monitoring long-running operations like training
  • 🤝 MCP Compliance: Full Model Context Protocol support with handshake endpoint and tool discovery for workflow automation
  • 🐳 Production Ready: Docker containerization with multi-stage builds and security scanning
  • 🧪 Battle Tested: Comprehensive test suite with CI/CD pipeline and 90%+ code coverage
  • 📊 Observability: Built-in metrics parsing, health checks, and monitoring endpoints
  • 🔒 Enterprise Security: API key authentication, input validation, and vulnerability scanning
  • ⚡ CPU & GPU Support: Automatic device detection with graceful fallbacks
  • 📚 Self-Documenting: Auto-generated OpenAPI/Swagger documentation

🏗️ Architecture Overview

Core Components (an illustrative sketch of how they fit together follows the list):

  • app/main.py: FastAPI application with route definitions and middleware
  • app/schemas.py: Pydantic models for comprehensive request/response validation
  • app/ultra.py: Ultralytics CLI integration with metrics parsing and device management
  • tools/UltralyticsMCPTool: TypeScript MCP client library for workflow automation
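As an illustration of how these layers typically connect (this is not the actual contents of app/main.py, app/schemas.py, or app/ultra.py, just a minimal sketch in the same style):

# Illustrative only -- not the real app code, just how the layers described above fit together.
from fastapi import FastAPI
from pydantic import BaseModel
import subprocess

app = FastAPI(title="Ultralytics MCP Server (sketch)")

class PredictRequest(BaseModel):          # stands in for a model defined in app/schemas.py
    source: str
    model: str = "yolov8n.pt"
    conf: float = 0.25

@app.post("/predict")
def predict(req: PredictRequest):
    # app/ultra.py plays this role: build the YOLO CLI command, run it, parse the output
    cmd = ["yolo", "predict", f"model={req.model}", f"source={req.source}", f"conf={req.conf}"]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {"command": " ".join(cmd), "return_code": proc.returncode, "success": proc.returncode == 0}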

🚀 Quick Start

📋 Prerequisites

  • Python 3.11+ (required for compatibility)
  • Conda/Miniconda (recommended for environment management)
  • Git (for cloning the repository)
  • 4GB+ RAM (for model operations)
  • Optional: NVIDIA GPU with CUDA support for faster training

⚡ One-Minute Setup

# 1. Clone and enter directory
git clone https://github.com/MetehanYasar11/ultralytics_mcp_server.git
cd ultralytics_mcp_server

# 2. Create environment and install dependencies
conda env create -f environment.yml
conda activate ultra-dev

# 3. Start the server
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload

# 4. Test it works (in another terminal)
curl http://localhost:8000/

🔍 Verify Installation

After setup, verify everything works:

# Check health endpoint
curl http://localhost:8000/
# Expected: {"status":"healthy","message":"Ultralytics API is running",...}

# View interactive API documentation
open http://localhost:8000/docs

# Test a simple prediction
curl -X POST "http://localhost:8000/predict" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "yolov8n.pt",
    "source": "https://ultralytics.com/images/bus.jpg",
    "conf": 0.5,
    "save": true
  }'

📖 What Just Happened?

  1. Environment Setup: Created isolated conda environment with PyTorch CPU support
  2. Dependency Installation: Installed Ultralytics, FastAPI, and all required packages
  3. Server Start: Launched FastAPI server with auto-reload for development
  4. API Test: Made a prediction request using a pre-trained YOLOv8 nano model (a Python equivalent is sketched below)
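The same prediction request can be issued from Python instead of curl; a minimal sketch using the requests library, assuming the server from the setup above is running on localhost:8000:

# Minimal client sketch (pip install requests); mirrors the curl test above.
import requests

payload = {
    "model": "yolov8n.pt",
    "source": "https://ultralytics.com/images/bus.jpg",
    "conf": 0.5,
    "save": True,
}
resp = requests.post("http://localhost:8000/predict", json=payload, timeout=600)
resp.raise_for_status()
result = resp.json()
print(result["success"], result.get("run_id"))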

🟢 Real-time SSE with n8n

SSE Live Logs

Live streaming updates for YOLO operations with Server-Sent Events (SSE)

Using SSE in n8n

  1. Drag in the MCP Client Tool node ➜ set SSE Endpoint to http://host.docker.internal:8092/sse/train (or, via Docker Compose DNS, http://ultra-api:8000/sse/train).
  2. In Settings ➜ OpenAPI URL, use http://host.docker.internal:8092/openapi.json.
  3. Set Timeout to 0 to keep the stream open.
  4. Run workflow → live epoch/loss lines appear in the node's execution log.

🎯 Available SSE Endpoints:

  • /sse - MCP handshake endpoint with tool discovery and keep-alive
  • /sse/train - Real-time training progress with epoch updates
  • /sse/predict - Live prediction results
  • /sse/val - Validation metrics streaming
  • /sse/export - Export progress updates
  • /sse/track - Object tracking stream
  • /sse/benchmark - Performance testing results
  • /sse/solution - Solution execution logs

📊 SSE Examples:

MCP Handshake:

curl -N "http://localhost:8092/sse" # Output: # data: {"tools": ["train", "val", "predict", "export", "track", "benchmark"], "info": "Ultralytics MCP ready"} # : ping # : ping # (continues with keep-alive pings every 15s)

Training with Live Progress:

curl -N "http://localhost:8092/sse/train?data=coco128.yaml&epochs=1&device=cpu" # Output: # data: Ultralytics YOLOv8.0.196 🚀 Python-3.11.5 torch-2.1.1 # data: # data: train: Scanning /datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 # data: val: Scanning /datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 # data: # data: Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size # data: 1/1 0.12G 1.325 2.009 1.268 89 640: 100%|██████████| 8/8 # data: [COMPLETED] Process finished successfully

🔗 OpenAPI Documentation: http://localhost:8092/docs#/default/sse_endpoint_sse__op__get
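The same training stream can also be consumed from Python without a dedicated SSE library; a minimal sketch using requests' streaming mode, with the endpoint and query parameters taken from the curl example above:

# Minimal SSE consumer sketch using requests' streaming mode (no SSE library required).
import requests

url = "http://localhost:8092/sse/train"
params = {"data": "coco128.yaml", "epochs": 1, "device": "cpu"}

with requests.get(url, params=params, stream=True, timeout=None) as resp:
    resp.raise_for_status()
    for raw in resp.iter_lines(decode_unicode=True):
        if not raw or raw.startswith(":"):        # skip blank lines and ": ping" keep-alives
            continue
        if raw.startswith("data:"):
            line = raw[len("data:"):].strip()
            print(line)                           # epoch/loss lines as shown above
            if line.startswith("[COMPLETED]"):
                break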

🧪 Running Tests

Our comprehensive test suite ensures reliability across all operations.

🏃‍♂️ Quick Test

# Run all tests (recommended)
python run_tests.py

# View test progress with details
pytest tests/test_flow.py -v -s

# Run only fast tests (skip training)
python run_tests.py quick

🔬 Test Categories

Test Type   | Command                   | Duration | What it Tests
Unit Tests  | pytest tests/test_unit.py | ~10s     | Individual functions
Integration | pytest tests/test_flow.py | ~5min    | Complete workflows
Quick Check | python run_tests.py quick | ~30s     | Endpoints only
Full Suite  | python run_tests.py       | ~5min    | Everything including training

📊 Understanding Test Output

tests/test_flow.py::TestUltralyticsFlow::test_health_check ✅ PASSED
tests/test_flow.py::TestUltralyticsFlow::test_01_train_model ✅ PASSED
tests/test_flow.py::TestUltralyticsFlow::test_02_validate_model ✅ PASSED
tests/test_flow.py::TestUltralyticsFlow::test_03_predict_with_model ✅ PASSED
# ... more tests
======================== 9 passed in 295.15s ========================

The integration test performs a complete YOLO workflow:

  1. 🏥 Health Check - Verify API is responsive
  2. 🏋️ Model Training - Train YOLOv8n for 1 epoch on COCO128
  3. 🔍 Model Validation - Validate the trained model
  4. 🎯 Prediction - Run inference on a test image
  5. 📁 File Verification - Check all expected outputs were created (a simplified Python sketch of this flow follows the list)
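A simplified, hypothetical sketch of that flow (not the actual tests/test_flow.py; payloads are abbreviated and artifact paths are assumed to follow the standard response envelope):

# Hypothetical sketch of the integration flow; the real assertions live in tests/test_flow.py.
import requests

BASE = "http://localhost:8000"

def test_complete_workflow():
    assert requests.get(f"{BASE}/").status_code == 200                   # 1. health check
    train = requests.post(f"{BASE}/train", json={
        "model": "yolov8n.pt", "data": "coco128.yaml", "epochs": 1, "device": "cpu",
    }).json()
    assert train["success"]                                              # 2. training
    best = next(a for a in train["artifacts"] if a.endswith("best.pt"))  # 5. expected output exists
    assert requests.post(f"{BASE}/val",
                         json={"model": best, "data": "coco128.yaml"}).json()["success"]  # 3. validation
    assert requests.post(f"{BASE}/predict",
                         json={"model": best,
                               "source": "https://ultralytics.com/images/bus.jpg"}
                         ).json()["success"]                             # 4. prediction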

CI/CD Workflow

The project uses GitHub Actions for continuous integration and deployment. See .github/workflows/ci.yml for the complete configuration.

Workflow Jobs

  1. 🧪 Test Job
    • Sets up Conda environment with caching
    • Runs pytest with coverage reporting
    • Uploads coverage to Codecov
  2. 🐳 Build Job (on success)
    • Builds Docker image with multi-stage optimization
    • Pushes to GitHub Container Registry
    • Supports multi-platform builds (amd64, arm64)
  3. 🔒 Security Job
    • Runs Trivy vulnerability scanner
    • Uploads SARIF results to GitHub Security
  4. 🔗 Integration Job
    • Tests complete API workflow
    • Validates endpoint responses
    • Checks health and documentation endpoints

Workflow Triggers

  • Push to main or develop branches
  • Pull Requests to main branch
  • Manual workflow dispatch

Caching Strategy

# Conda packages cached by environment.yml hash
key: conda-${{ runner.os }}-${{ hashFiles('environment.yml') }}

# Docker layers cached using GitHub Actions cache
cache-from: type=gha
cache-to: type=gha,mode=max

Docker Deployment

Quick Deploy

# Using Docker Compose (recommended)
docker-compose up -d

# Check service status
docker-compose ps

# View logs
docker-compose logs -f ultra-api

Production Deployment

# Production configuration
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# With monitoring stack
docker-compose -f docker-compose.yml -f docker-compose.prod.yml --profile monitoring up -d

Environment Configuration

# Copy environment template
cp .env.example .env

# Edit configuration
nano .env

Key Variables:

  • ULTRA_API_KEY: API authentication key
  • CUDA_VISIBLE_DEVICES: GPU selection
  • MEMORY_LIMIT: Container memory limit

Service Access

Once deployed, access the service at:

  • API: http://localhost:8000
  • Docs: http://localhost:8000/docs
  • Prometheus (if enabled): http://localhost:9090
  • Grafana (if enabled): http://localhost:3000

For detailed Docker configuration, see DOCKER.md.

📚 API Reference & Examples

🎯 Core Operations

Operation | Endpoint        | Purpose                         | Example Use Case
Train     | POST /train     | Train custom models             | Training on your dataset
Validate  | POST /val       | Model performance testing       | Check accuracy metrics
Predict   | POST /predict   | Object detection/classification | Real-time inference
Export    | POST /export    | Format conversion               | Deploy to mobile/edge
Track     | POST /track     | Object tracking in videos       | Surveillance, sports analysis
Benchmark | POST /benchmark | Performance testing             | Hardware optimization

📝 Request/Response Format

All endpoints return a standardized response structure:

{ "run_id": "abc123-def456-ghi789", "command": "yolo train model=yolov8n.pt data=coco128.yaml epochs=10", "return_code": 0, "stdout": "Training completed successfully...", "stderr": "", "metrics": { "mAP50": 0.95, "mAP50-95": 0.73, "precision": 0.89, "recall": 0.84, "training_time": 1200.5 }, "artifacts": [ "runs/train/exp/weights/best.pt", "runs/train/exp/weights/last.pt", "runs/train/exp/results.csv" ], "success": true, "timestamp": "2025-07-12T10:30:00Z" }

🚀 Example Operations

1. Training a Custom Model
curl -X POST "http://localhost:8000/train" \ -H "Content-Type: application/json" \ -d '{ "model": "yolov8n.pt", "data": "coco128.yaml", "epochs": 50, "imgsz": 640, "batch": 16, "device": "0", "extra_args": { "patience": 10, "save_period": 5, "cos_lr": true } }'
2. Real-time Prediction
curl -X POST "http://localhost:8000/predict" \ -H "Content-Type: application/json" \ -d '{ "model": "yolov8n.pt", "source": "path/to/image.jpg", "conf": 0.25, "iou": 0.7, "save": true, "save_txt": true, "save_conf": true }'
3. Model Export for Deployment
curl -X POST "http://localhost:8000/export" \ -H "Content-Type: application/json" \ -d '{ "model": "runs/train/exp/weights/best.pt", "format": "onnx", "dynamic": true, "simplify": true, "opset": 11 }'
4. Video Object Tracking
curl -X POST "http://localhost:8000/track" \ -H "Content-Type: application/json" \ -d '{ "model": "yolov8n.pt", "source": "path/to/video.mp4", "tracker": "bytetrack.yaml", "conf": 0.3, "save": true }'

📊 Common Parameters Reference

Parameter  | Type    | Default  | Description           | Example
model      | string  | required | Model path or name    | "yolov8n.pt"
data       | string  | -        | Dataset YAML path     | "coco128.yaml"
source     | string  | -        | Input source          | "image.jpg", "video.mp4", "0" (webcam)
epochs     | integer | 100      | Training epochs       | 50
imgsz      | integer | 640      | Image size            | 320, 640, 1280
device     | string  | "cpu"    | Compute device        | "cpu", "0", "0,1"
conf       | float   | 0.25     | Confidence threshold  | 0.1 to 1.0
iou        | float   | 0.7      | IoU threshold for NMS | 0.1 to 1.0
batch      | integer | 16       | Batch size            | 1, 8, 32
save       | boolean | false    | Save results          | true, false
extra_args | object  | {}       | Additional YOLO args  | {"patience": 10}

🧪 Testing & Quality Assurance

🔬 Comprehensive Test Suite

Our testing infrastructure ensures reliability across all YOLO operations:

# Run all tests with conda environment
conda activate ultra-dev
pytest tests/ -v

# Run specific test categories
pytest tests/test_flow.py -v         # Core workflow tests
pytest tests/test_mcp_train.py -v    # Training specific tests
pytest tests/test_mcp_predict.py -v  # Prediction tests
pytest tests/test_mcp_export.py -v   # Export functionality tests

# Generate coverage report
pytest tests/ --cov=app --cov-report=html

📊 Test Coverage

Component  | Tests   | Coverage | Description
Core Flow  | 9 tests | 95%+     | Complete train→validate→predict workflow
Training   | 5 tests | 98%      | Model training with various configurations
Prediction | 4 tests | 97%      | Inference on images, videos, webcam
Export     | 3 tests | 95%      | Model format conversion (ONNX, TensorRT)
Tracking   | 3 tests | 92%      | Object tracking in video streams
Benchmark  | 2 tests | 90%      | Performance testing and profiling

🚦 CI/CD Pipeline

# Automated testing on every commit
Workflow: Test Suite
├── Environment Setup (Conda + PyTorch CPU)
├── Dependency Installation
├── Linting & Code Quality (flake8, black)
├── Unit Tests (pytest)
├── Integration Tests
├── Security Scanning (bandit)
├── Docker Build & Test
└── Documentation Validation

🔍 Example Test Run

$ pytest tests/test_flow.py::test_complete_workflow -v

tests/test_flow.py::test_complete_workflow PASSED [100%]

======================== Test Results ========================
✅ Train: Model trained successfully (epochs: 2)
✅ Validate: mAP50 = 0.847, mAP50-95 = 0.621
✅ Predict: 3 objects detected with confidence > 0.5
✅ Export: ONNX model exported (size: 12.4MB)
✅ Cleanup: Temporary files removed

Duration: 45.2s | Memory: 2.1GB | CPU: Intel i7
=================== 1 passed in 45.23s ===================

See tests/README.md for detailed test documentation.

🚀 n8n Integration

n8n MCP Client Setup

SSE Endpoint : http://host.docker.internal:8092/sse
OpenAPI URL  : http://host.docker.internal:8092/openapi.json
Manifest URL : http://host.docker.internal:8092/mcp/manifest.json
Tools        : train · val · predict · export · track · benchmark
Timeout      : 0

🤝 MCP Handshake Protocol

The /sse endpoint serves as a Model Context Protocol (MCP) handshake endpoint (a minimal client sketch follows the steps below):

  1. Initial Connection: When you connect to /sse, it immediately sends a tool discovery message:
    data: {"tools": ["train", "val", "predict", "export", "track", "benchmark"], "info": "Ultralytics MCP ready"}
  2. Keep-Alive: After the handshake, it sends ping comments every 15 seconds to maintain the connection:
    : ping
  3. Tool Discovery: MCP clients can discover available tools via:
    • Manifest Endpoint: GET /mcp/manifest.json - Static tool definitions
    • SSE Handshake: GET /sse - Dynamic tool discovery with live connection
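A minimal Python sketch of the client side of this handshake, reading the tool-discovery message and ignoring the keep-alive pings, again using plain requests streaming:

# Sketch: read the tool-discovery message from /sse, ignoring ": ping" keep-alives.
import json
import requests

with requests.get("http://localhost:8092/sse", stream=True, timeout=None) as resp:
    resp.raise_for_status()
    for raw in resp.iter_lines(decode_unicode=True):
        if not raw or raw.startswith(":"):
            continue
        if raw.startswith("data:"):
            handshake = json.loads(raw[len("data:"):])
            print("tools:", handshake["tools"])     # ["train", "val", "predict", ...]
            break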

Streaming in n8n

  1. Drag in the MCP Client Tool node ➜ set SSE Endpoint to http://host.docker.internal:8092/sse/train (or /sse/predict).
  2. Set OpenAPI URL to http://host.docker.internal:8092/openapi.json.
  3. Set Timeout to 0 and Auth to None.
  4. Run workflow → live epoch/loss lines appear in the execution log.

Available SSE Endpoints:

  • /sse - MCP handshake endpoint with tool discovery and keep-alive
  • /sse/train - Real-time training progress with epoch updates
  • /sse/predict - Live prediction results
  • /sse/val - Validation metrics streaming
  • /sse/export - Export progress updates
  • /sse/track - Object tracking stream
  • /sse/benchmark - Performance testing results
  • /sse/solution - Solution execution logs

Example SSE Training Stream:

curl -N "http://localhost:8092/sse/train?data=coco128.yaml&epochs=1&device=cpu" # Output: # data: Ultralytics YOLOv8.0.196 🚀 Python-3.11.5 torch-2.1.1 # data: Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size # data: 1/1 0.12G 1.325 2.009 1.268 89 640: 100%|██████████| 8/8 # data: [COMPLETED] Process finished successfully


1. Environment Setup

Add the Ultralytics API URL to your n8n environment:

# In your n8n environment
export ULTRA_API_URL=http://localhost:8000

# Or in Docker Compose:
environment:
  - ULTRA_API_URL=http://ultralytics-api:8000

2. Install UltralyticsMCPTool

# Navigate to the tool directory
cd tools/UltralyticsMCPTool

# Install dependencies
npm install

# Build the tool
npm run build

# Link for global usage
npm link

3. n8n Node Configuration

Create a custom n8n node or use the HTTP Request node:

// n8n Custom Node Example
import UltralyticsMCPTool from 'ultralytics-mcp-tool';

const tool = new UltralyticsMCPTool(process.env.ULTRA_API_URL);

// Train a model
const result = await tool.train({
  model: 'yolov8n.pt',
  data: 'coco128.yaml',
  epochs: 10
});

4. Workflow Examples

Image Classification Workflow:

  1. Trigger: Webhook receives image
  2. Ultralytics: Predict objects
  3. Logic: Process results
  4. Output: Send notifications

Training Pipeline (a Python sketch follows the list):

  1. Schedule: Daily trigger
  2. Ultralytics: Train model
  3. Validate: Check performance
  4. Deploy: Update production model
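Outside n8n, the same pipeline can be scripted directly against the API; a hedged sketch in which the payloads and the 0.8 mAP50 deployment gate are purely illustrative:

# Illustrative pipeline: train, validate, then gate deployment on a metric threshold.
import requests

BASE = "http://localhost:8000"

train = requests.post(f"{BASE}/train", json={
    "model": "yolov8n.pt", "data": "coco128.yaml", "epochs": 50,
}).json()
if not train["success"]:
    raise SystemExit("training failed")

best = next(a for a in train["artifacts"] if a.endswith("best.pt"))
val = requests.post(f"{BASE}/val", json={"model": best, "data": "coco128.yaml"}).json()

if val["metrics"].get("mAP50", 0) >= 0.8:     # arbitrary quality gate for illustration
    print("deploying", best)                  # e.g. copy the weights to the serving location
else:
    print("below threshold; keeping the current production model")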

5. MCP Integration

// Get available tools const manifest = UltralyticsMCPTool.manifest(); console.log('Available operations:', manifest.tools.map(t => t.name)); // Execute with different channels const httpResult = await tool.execute('predict', params, 'http'); const stdioResult = await tool.execute('predict', params, 'stdio'); // Real-time updates with SSE tool.trainSSE(params, { onProgress: (data) => updateWorkflowStatus(data), onComplete: (result) => triggerNextNode(result) });

For detailed integration examples, see tools/UltralyticsMCPTool/README.md.

🐳 Docker Deployment

🚀 Quick Docker Setup

# Clone and build
git clone https://github.com/your-username/ultralytics-mcp-server.git
cd ultralytics-mcp-server

# Build and run with Docker Compose
docker-compose up -d

# Verify deployment
curl http://localhost:8000/docs

📁 Docker Configuration

Production-ready setup:

# Dockerfile highlights
FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY app/ ./app/
COPY models/ ./models/

# Expose port and run
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Docker Compose services:

# docker-compose.yml
version: '3.8'

services:
  ultralytics-api:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./models:/app/models
      - ./data:/app/data
      - ./runs:/app/runs
    environment:
      - YOLO_CACHE_DIR=/app/cache
      - YOLO_SETTINGS_DIR=/app/settings
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - ultralytics-api

🔧 Environment Variables

Variable          | Default       | Description
YOLO_CACHE_DIR    | /tmp/yolo     | Model cache directory
YOLO_SETTINGS_DIR | /tmp/settings | Settings directory
API_HOST          | 0.0.0.0       | API host binding
API_PORT          | 8000          | API port
LOG_LEVEL         | INFO          | Logging level
MAX_WORKERS       | 4             | Uvicorn workers
MODEL_DIR         | /app/models   | Model storage path

🌐 Production Deployment

# Production deployment with SSL
docker-compose -f docker-compose.prod.yml up -d

# Health check
curl -f http://localhost:8000/health || exit 1

# Scale services
docker-compose up -d --scale ultralytics-api=3

# Monitor logs
docker-compose logs -f ultralytics-api

🔧 API Documentation

Response Format

All endpoints return a standardized response:

{ "run_id": "uuid-string", "command": "yolo train model=yolov8n.pt...", "return_code": 0, "stdout": "command output", "stderr": "error output", "metrics": { "mAP50": 0.95, "precision": 0.89, "training_time": 1200 }, "artifacts": [ "runs/train/exp/weights/best.pt", "runs/train/exp/results.csv" ], "success": true, "timestamp": "2024-01-01T12:00:00Z" }

Error Handling

{ "error": "Validation Error", "details": "Model file not found: invalid_model.pt", "timestamp": "2024-01-01T12:00:00Z" }

🛡️ Security & Authentication

# API Key authentication
curl -H "X-API-Key: your-api-key-here" \
  -X POST "http://localhost:8000/predict" \
  -d '{"model": "yolov8n.pt", "source": "image.jpg"}'

# JWT Token authentication
curl -H "Authorization: Bearer your-jwt-token" \
  -X POST "http://localhost:8000/train" \
  -d '{"model": "yolov8n.pt", "data": "dataset.yaml"}'

📊 Health & Monitoring

# Health check endpoint
curl http://localhost:8000/health
# Response: {"status": "healthy", "version": "1.0.0", "uptime": 3600}

# Metrics endpoint
curl http://localhost:8000/metrics
# Response: Prometheus-formatted metrics

# Status endpoint with system info
curl http://localhost:8000/status
# Response: {"gpu": "available", "memory": "8GB", "models_loaded": 3}
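These endpoints make liveness probes easy to script; a minimal sketch that could back a cron job or container healthcheck, assuming the response fields shown above:

# Minimal liveness probe sketch: exit non-zero when the API is not healthy.
import sys
import requests

try:
    health = requests.get("http://localhost:8000/health", timeout=5).json()
except requests.RequestException:
    sys.exit(1)

sys.exit(0 if health.get("status") == "healthy" else 1)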

🤝 Contributing Guidelines

We welcome contributions! Please follow these guidelines:

Development Setup

  1. Fork and Clone
    git clone https://github.com/your-username/ultralytics-mcp-server.git
    cd ultralytics-mcp-server
  2. Create Environment
    conda env create -f environment.yml
    conda activate ultra-dev
  3. Install Development Tools
    pip install black isort flake8 mypy pytest-cov

Code Standards

  • Python: Follow PEP 8, use Black for formatting
  • TypeScript: Use ESLint and Prettier
  • Documentation: Update README.md and docstrings
  • Tests: Maintain 80%+ test coverage

Pre-commit Checks

# Format code
black app/ tests/
isort app/ tests/

# Lint code
flake8 app/ tests/
mypy app/

# Run tests
pytest --cov=app

Pull Request Process

  1. Create Feature Branch
    git checkout -b feature/your-feature-name
  2. Make Changes
    • Write code following standards
    • Add/update tests
    • Update documentation
  3. Test Changes
    pytest -v
    python run_tests.py
  4. Submit PR
    • Clear description of changes
    • Reference related issues
    • Ensure CI passes

Issue Reporting

When reporting issues, include:

  • Environment: OS, Python version, dependencies
  • Steps: Minimal reproduction steps
  • Expected: What should happen
  • Actual: What actually happens
  • Logs: Error messages and stack traces

Feature Requests

For new features:

  • Use Case: Why is this needed?
  • Proposal: How should it work?
  • Impact: Who benefits from this?
  • Implementation: Any technical considerations?

📄 License & Support

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

Key permissions:

  • ✅ Commercial use
  • ✅ Modification
  • ✅ Distribution
  • ✅ Private use

🆘 Getting Help

Resource        | Link                       | Purpose
📚 API Docs     | http://localhost:8000/docs | Interactive API documentation
🐛 Issues       | GitHub Issues              | Bug reports & feature requests
💬 Discussions  | GitHub Discussions         | Questions & community chat
📖 Ultralytics  | Official Docs              | YOLO model documentation
🔧 MCP Protocol | Specification              | MCP standard reference

🎯 Quick Support Checklist

Before asking for help:

  1. Check the FAQ for common issues
  2. Search existing GitHub Issues
  3. Test with the latest version
  4. Include environment details in your issue

When reporting bugs:

# Include this information
OS: Windows 11 / macOS 14 / Ubuntu 22.04
Python: 3.11.x
Conda env: ultra-dev
PyTorch: 2.5.1+cpu
Error: [paste complete error message]

🙏 Acknowledgments

Component      | Thanks To         | For
🎯 YOLO Models | Ultralytics       | Revolutionary object detection
🚀 FastAPI     | Sebastian Ramirez | Lightning-fast API framework
🔧 Pydantic    | Samuel Colvin     | Data validation & settings
🐳 Docker      | Docker Inc        | Containerization platform
🧪 pytest      | pytest-dev        | Testing framework
🌐 Conda       | Anaconda          | Package management

🌟 Built with ❤️ for the Computer Vision Community 🌟

⭐ Star this repo | 🍴 Fork & contribute | 📢 Share with friends

Empowering developers to build intelligent computer vision applications with ease 🚀
