# foundry-mcp
Turn AI coding assistants into reliable software engineers with structured specs, progress tracking, and automated review.
## Table of Contents

- [Why foundry-mcp?](#why-foundry-mcp)
- [Key Features](#key-features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [How It Works](#how-it-works)
- [Configuration](#configuration)
- [Advanced Usage](#advanced-usage)
- [Documentation](#documentation)
- [Scope and Limitations](#scope-and-limitations)
- [Testing](#testing)
- [Contributing](#contributing)
- [License](#license)
## Why foundry-mcp?

**The problem:** AI coding assistants are powerful but unreliable on complex tasks. They lose context mid-feature, skip steps without warning, and deliver inconsistent results across sessions.

**The solution:** foundry-mcp provides the scaffolding to break work into specs, track progress, and verify outputs—so your AI assistant delivers like a professional engineer.

- **No more lost context** — Specs persist state across sessions so the AI picks up where it left off.
- **No more skipped steps** — Task dependencies and blockers ensure nothing gets missed.
- **No more guessing progress** — See exactly what's done, what's blocked, and what's next.
- **No more manual review** — AI review validates implementation against spec requirements.
## Key Features

- **Specs keep AI on track** — Break complex work into phases and tasks the AI can complete without losing context.
- **Progress you can see** — Track what's done, what's blocked, and what's next across multi-session work.
- **AI-powered review** — LLM integration reviews specs, generates PR descriptions, and validates implementation.
- **Works with your tools** — Runs as MCP server (Claude Code, Gemini CLI) or standalone CLI with JSON output.
- **Security built in** — Workspace scoping, API key auth, rate limits, and audit logging ship by default.
- **Discovery-first** — Capabilities declared in a manifest so clients negotiate features automatically.
## Installation

### Prerequisites

- Python 3.10 or higher
- macOS, Linux, or Windows
- MCP-compatible client (e.g., Claude Code)
### Install with uvx (recommended)
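uvx runs a PyPI package in an isolated environment without a permanent install. Assuming the package name is `foundry-mcp` (matching the pip option below), the usual uv patterns are:

```
# Run on demand in an isolated, uv-managed environment:
uvx foundry-mcp

# Or install it as a persistent uv-managed tool:
uv tool install foundry-mcp
```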
### Install with pip
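A standard pip install, assuming the package is published to PyPI as `foundry-mcp`:

```
# Install into the active Python environment (3.10+).
pip install foundry-mcp
```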
### Install from source (development)
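A conventional editable install from a checkout; the repository URL below is a placeholder, not confirmed by this README:

```
# Clone the repository (URL is illustrative) and install in editable mode.
git clone https://github.com/<org>/foundry-mcp.git
cd foundry-mcp
pip install -e .
```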
## Quick Start
1. Install the claude-foundry plugin (from within Claude Code):
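Claude Code installs plugins through its `/plugin` command; the marketplace name below is a placeholder, not confirmed by this README:

```
/plugin install claude-foundry@<marketplace>
```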
Restart Claude Code and trust the repository when prompted.
Note: The plugin automatically registers the MCP server using `uvx` — no separate installation needed.
2. Run setup:
3. Start building:
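The prompt is free-form natural language; an illustrative example (the feature named is hypothetical):

```
Create a spec for adding rate limiting to the API, then implement it.
```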
Claude creates a spec with phases, tasks, and verification steps. Ask to implement and it works through tasks in dependency order.
## How It Works
foundry-mcp is the MCP server that provides the underlying tools and APIs. The claude-foundry plugin provides the user-facing skills that orchestrate workflows.
| Component | Role |
| --- | --- |
| foundry-mcp | MCP server + CLI providing spec/task/review tools |
| claude-foundry | Claude Code plugin providing skills and workflow |
For most users, install both and interact through natural language. The plugin handles tool orchestration automatically.
## Configuration

### API Keys
foundry-mcp uses LLM providers for AI-powered features like spec review, consensus, and deep research. Set the API keys for providers you want to use:
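For example, via environment variables. `OPENAI_API_KEY` is the standard name for OpenAI; other variable names follow each provider's usual convention and are assumptions rather than confirmed by this README:

```shell
# Provider API keys (set only the ones you use; values are placeholders).
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
# A local Ollama backend needs no key; it serves on localhost by default.
```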
### TOML Configuration (Optional)
Configuration loads in layers (later layers override earlier):

1. User config (`~/.foundry-mcp.toml`) - User-wide defaults for API keys, preferred providers
2. Project config (`./foundry-mcp.toml`) - Project-specific settings
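A sketch of what a project-level `foundry-mcp.toml` might contain; the table and key names are illustrative, not a documented schema:

```toml
# ./foundry-mcp.toml - project-specific settings (illustrative keys only)
[llm]
provider = "openai"     # or "ollama" for a local backend
fallback = ["ollama"]

[review]
auto_pr_description = true
```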
## Advanced Usage

### Direct MCP Configuration (without plugin)
For MCP clients other than Claude Code, or if you prefer manual configuration:
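MCP clients typically take a JSON entry naming the launch command; a sketch, assuming the server is runnable via uvx:

```json
{
  "mcpServers": {
    "foundry-mcp": {
      "command": "uvx",
      "args": ["foundry-mcp"]
    }
  }
}
```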
### CLI Usage
All MCP tools are also available via CLI with JSON output:
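A hypothetical session illustrating the JSON-first pattern; the subcommand names are assumptions, not documented here:

```
# Subcommand names are illustrative only.
foundry-mcp spec list --json | jq '.[].status'
foundry-mcp task next --json
```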
### Launch as Standalone MCP Server
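A sketch of a direct launch, assuming the package entry point speaks MCP over stdio (exact flags, if any, are not documented here):

```
# Start the server for any MCP-capable client (invocation is illustrative).
uvx foundry-mcp
```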
The server advertises its capabilities, feature flags, and response contract so MCP clients (Claude Code, Gemini CLI, etc.) can connect automatically.
## Documentation

### User guides
| Guide | Description |
| --- | --- |
| | Get up and running in 5 minutes |
| | Understand specs, phases, and tasks |
| | End-to-end development workflows |
| | Complete CLI command documentation |
| | All MCP tools and their parameters |
| | Environment variables and TOML setup |
| | Common issues and solutions |
### Concepts
| Guide | Description |
| --- | --- |
| | Why spec-driven development matters |
| | Standardized response format |
| | Spec file structure and fields |
| | Provider setup and fallbacks |
### Developer docs
| Guide | Description |
| --- | --- |
| | Entry point for developer documentation |
| | Canonical implementation checklist |
| | Standardized envelope reference |
| | JSON-first CLI expectations |
## Scope and Limitations

**Best for:**

- Multi-step feature development with AI assistants
- Teams wanting structured handoff between AI and human reviewers
- Projects requiring audit trails and progress visibility

**Not suited for:**

- Quick one-off code changes (use your AI assistant directly)
- Non-software tasks (specs are code-focused)
- Fully autonomous AI agents (foundry assumes human oversight)
## Testing
- Regression tests keep MCP/CLI adapters aligned across surfaces.
- Golden fixtures (`tests/fixtures/golden`) ensure response envelopes, error semantics, and pagination never regress.
- Freshness checks run alongside core unit and integration suites.
## Contributing
Contributions are welcome! Please read the MCP Best Practices before submitting PRs. All changes should keep specs, docs, code, and fixtures in sync.
## License
MIT License — see LICENSE for details.
Built by · Report an Issue · View on GitHub