QMCP - Model Context Protocol Server
A spec-aligned Model Context Protocol (MCP) server built with FastAPI.
Features
- ✅ Tool Discovery - List available tools via `/v1/tools`
- ✅ Tool Invocation - Execute tools via `/v1/tools/{name}`
- ✅ Invocation History - Audit trail via `/v1/invocations`
- ✅ Human-in-the-Loop - Request human input via `/v1/human/*`
- ✅ Persistence - SQLite with SQLModel/aiosqlite
- ✅ Python Client - `qmcp.client.MCPClient` for workflows
- ✅ Metaflow Examples - Ready-to-use flow templates
- ✅ Agent Framework - SQLModel schemas + mixins for agent types/topologies
- ✅ PydanticAI Integration - Create agents from QMCP models with full audit trail
- ✅ Structured Logging - JSON logs with structlog
- ✅ Request Tracing - Correlation IDs across requests
- ✅ Metrics - Prometheus-compatible `/metrics` endpoint
- ✅ CLI Interface - Manage via the `qmcp` command
Quick Start
See quickstart.md for a copy-paste walkthrough.
Adoption and Onboarding
Adoption checklist:
- Decide how the server is hosted (local, container, or VM) and who can reach it.
- Set `QMCP_HOST`, `QMCP_PORT`, and `QMCP_DATABASE_URL` for your environment.
- Standardize `X-Correlation-ID` values for audit trails across clients.
- Decide how humans submit HITL responses (UI or API).
- Wire `/metrics` into your monitoring stack.
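As a minimal sketch, the `QMCP_*` variables from the checklist can be resolved with environment-variable fallbacks. The default values below are illustrative assumptions, not the server's documented defaults:

```python
import os

# Hypothetical helper showing one way the QMCP_* variables from the
# checklist could be resolved. The defaults below are illustrative
# assumptions, not the server's documented defaults.
def qmcp_settings() -> dict:
    return {
        "host": os.environ.get("QMCP_HOST", "127.0.0.1"),
        "port": int(os.environ.get("QMCP_PORT", "3333")),
        "database_url": os.environ.get(
            "QMCP_DATABASE_URL", "sqlite+aiosqlite:///./qmcp.db"
        ),
    }
```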
Onboarding path:
- Install dependencies with `uv sync --all-extras`.
- Run the end-to-end tutorial below.
- Run `uv run qmcp serve` for local exploration.
End-to-End Tutorial (HITL approval workflow)
This tutorial mirrors the end-to-end test `tests/test_hitl.py::TestHITLWorkflow::test_complete_approval_workflow`.
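The approval workflow exercised by that test can be outlined as the following call sequence. This is an illustrative sketch, not the tutorial's actual code: the `/v1/human/` sub-paths and payload shapes are assumptions, and `call` stands in for any HTTP client (e.g. `qmcp.client.MCPClient` or plain `httpx`).

```python
# Illustrative sketch of the HITL approval workflow. The /v1/human/ paths
# and payload shapes are assumptions; `call(method, path, body)` stands in
# for an HTTP client and returns the decoded JSON response.
def approval_workflow(call):
    # 1. The planner tool proposes a plan.
    plan = call("POST", "/v1/tools/planner", {"input": {"goal": "deploy v2"}})
    # 2. Create a human request asking for approval (path is an assumption).
    request = call("POST", "/v1/human/requests",
                   {"question": "Approve this plan?", "context": plan})
    # 3. A human submits an approval, normally via a UI or the API.
    call("POST", f"/v1/human/requests/{request['id']}/response",
         {"approved": True})
    # 4. The executor tool runs the approved plan.
    return call("POST", "/v1/tools/executor", {"input": {"plan": plan}})

# Dry run against a stub client that echoes the request body.
def stub_call(method, path, body):
    return {"id": "req-1", "plan": body}

result = approval_workflow(stub_call)
```

Against a running server, the same sequence would be issued over HTTP, with the human-response step typically performed out-of-band by a reviewer.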
Client Library
See docs/client.md for full API documentation.
CLI Commands
Cookbook flows run in Docker and require Docker Desktop (Linux engine).
Add `--no-sync` to skip syncing flow dependencies if the image is already built.
API Endpoints
| Endpoint | Method | Description |
| --- | --- | --- |
| | GET | Health check |
| `/v1/tools` | GET | List available tools |
| `/v1/tools/{name}` | POST | Invoke a tool |
| `/v1/invocations` | GET | List invocation history |
| | GET | Get single invocation |
| | POST | Create human request |
| | GET | List human requests |
| | GET | Get request with response |
| | POST | Submit human response |
| `/metrics` | GET | Prometheus metrics |
| | GET | Metrics as JSON |
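As a minimal sketch, a tool invocation request can be assembled with the standard library. The `{"input": ...}` body shape is an assumption about the request schema, and the port matches the `host.docker.internal:3333` example later in this README:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:3333"  # assumed local server address

# Build (but do not yet send) a POST to the tool-invocation endpoint for
# the built-in echo tool. The {"input": ...} body shape is an assumption.
payload = json.dumps({"input": {"message": "hello"}}).encode("utf-8")
req = urllib.request.Request(
    f"{BASE}/v1/tools/echo",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "X-Correlation-ID": "demo-123",  # propagated for request tracing
    },
    method="POST",
)
# With the server running: urllib.request.urlopen(req)
```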
Built-in Tools
- `echo` - Echo input back (for testing)
- `planner` - Create execution plans
- `executor` - Execute approved plans
- `reviewer` - Review and assess results
Development
Architecture
See docs/architecture.md for the full architectural overview.
The system follows a three-plane architecture:
1. Client/Orchestration - Metaflow workflows (MCP client)
2. MCP Server - FastAPI service (this project)
3. Execution/Storage - Tools and database
Documentation
- Quickstart - Copy-paste setup and validation
- Overview - What and why
- Architecture - How and constraints
- Tools - Tool capabilities
- Client Library - Python client API
- Human-in-the-Loop - HITL guide
- Agent Framework - Agent schemas and mixins
- PydanticAI Integration - Agent runtime integration
- Deployment - Production deployment guide
- Contributing - Development guidelines
- Roadmap - Development phases
Example Flows
See examples/flows/ for Metaflow integration examples:
- `simple_plan.py` - Basic tool invocation
- `approved_deploy.py` - HITL approval workflow
- `local_agent_chain.py` - Local LLM plan -> review -> refine with SQLModel artifacts
- `local_qc_gauntlet.py` - Local LLM QC checklist/task/gate builder
- `local_release_notes.py` - Local LLM release notes and doc update suggestions
For local LLM flows, install extras with `uv sync --extra flows`.
When running flows with `--use-mcp True`, start the server with `uv run qmcp serve --host 0.0.0.0` so that Docker-based flows can reach it.
On Windows, prefer running flows in a Linux container to avoid platform-specific Metaflow dependencies.
Docker runner (recommended on Windows):
Set `MCP_URL` and `LLM_BASE_URL` (or pass `--mcp-url` / `--llm-base-url`) when running in Docker, e.g. `http://host.docker.internal:3333`.
License
MIT