<div align="center">
<h1>ACE MCP Server</h1>
<p>
<strong>Self-Improving Context for Your AI Coding Assistant</strong><br/>
<i>86.9% lower adaptation latency, +10.6% higher accuracy</i>
</p>
<p>
<a href="https://www.npmjs.com/package/ace-mcp-server">
<img src="https://img.shields.io/npm/v/ace-mcp-server?style=for-the-badge&color=blue" alt="npm version"/>
</a>
<a href="https://github.com/Angry-Robot-Deals/ace-mcp/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-MIT-purple?style=for-the-badge" alt="License"/>
</a>
<a href="https://arxiv.org/pdf/2510.04618">
<img src="https://img.shields.io/badge/Research-Stanford-red?style=for-the-badge" alt="Research Paper"/>
</a>
</p>
<p>
<a href="#quick-start">Quick Start</a> •
<a href="#features">Features</a> •
<a href="#how-it-works">How It Works</a> •
<a href="#documentation">Documentation</a> •
<a href="#docker-deployment">Deployment</a>
</p>
</div>
---
## Overview
**ACE MCP Server** implements the Agentic Context Engineering framework as a Model Context Protocol (MCP) server for Cursor AI. Your AI assistant learns from its own execution feedback, building a self-improving knowledge base that gets better with every task.
Based on [research](https://arxiv.org/pdf/2510.04618) from Stanford University & SambaNova Systems (October 2025).
<div align="center">
<table>
<tr>
<td align="center"><strong>86.9% Lower Adaptation Latency</strong><br/><sub>Incremental updates vs full rewrites</sub></td>
<td align="center"><strong>+10.6% Accuracy</strong><br/><sub>Self-learning from feedback</sub></td>
<td align="center"><strong>Continuous Improvement</strong><br/><sub>Gets better with each use</sub></td>
<td align="center"><strong>Context Isolation</strong><br/><sub>Separate playbooks per domain</sub></td>
</tr>
</table>
</div>
---
## Why ACE?
Traditional AI assistants forget everything between conversations. ACE remembers what works and what doesn't, creating a **playbook** of proven strategies that grows with your team's experience.
### The Problem
- High token costs from sending full context every time
- Same mistakes repeated across conversations
- No learning from past successes/failures
- Generic responses that don't fit your codebase
### The Solution
- ✅ Incremental delta updates (send only changes)
- ✅ Self-learning from execution feedback
- ✅ Semantic deduplication (no redundant knowledge)
- ✅ Context-aware strategies per domain
---
## Quick Start
### Prerequisites
- Node.js 18+
- Cursor AI or MCP-compatible client
- OpenAI API key OR local LM Studio server
### Installation
```bash
# Clone repository
git clone https://github.com/Angry-Robot-Deals/ace-mcp.git
cd ace-mcp
# Install dependencies
npm install
# Configure environment
cp .env.example .env
# Edit .env with your LLM provider settings
# Build
npm run build
# Start server
npm start
```
### Cursor AI Configuration
Add to `~/.cursor/mcp.json`:
```json
{
"mcpServers": {
"ace-context-engine": {
"command": "node",
"args": ["/absolute/path/to/ace-mcp-server/dist/index.js"],
"env": {
"LLM_PROVIDER": "openai",
"OPENAI_API_KEY": "sk-your-api-key-here",
"ACE_CONTEXT_DIR": "./contexts",
"ACE_LOG_LEVEL": "info"
}
}
}
}
```
### Using Local LM Studio
```json
{
"mcpServers": {
"ace-context-engine": {
"command": "node",
"args": ["/absolute/path/to/ace-mcp-server/dist/index.js"],
"env": {
"LLM_PROVIDER": "lmstudio",
"LMSTUDIO_BASE_URL": "http://localhost:1234/v1",
"LMSTUDIO_MODEL": "your-model-name",
"ACE_CONTEXT_DIR": "./contexts"
}
}
}
}
```
---
## Features
### Core ACE Framework
- **Generator**: Creates code using learned strategies
- **Reflector**: Analyzes what worked and what didn't
- **Curator**: Synthesizes insights into playbook updates
### Smart Context Management
- **Incremental Updates**: Only send deltas, not full context
- **Semantic Deduplication**: Automatically merge similar strategies
- **Multi-Context Support**: Separate playbooks for frontend, backend, DevOps, etc.
- **Persistent Storage**: JSON-based storage with configurable backends
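The deduplication step can be sketched as a cosine-similarity check against the configured threshold. This is a minimal illustration; `Bullet`, `addBullet`, and the embedding handling are hypothetical names, not the server's actual API:

```typescript
// A candidate bullet is merged away (skipped) when any existing bullet's
// embedding is at least ACE_DEDUP_THRESHOLD similar to it.
interface Bullet {
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Returns a new playbook; the candidate is only appended when it is not a
// near-duplicate of an existing bullet. 0.85 mirrors the default threshold
// from the configuration section below.
function addBullet(playbook: Bullet[], candidate: Bullet, threshold = 0.85): Bullet[] {
  const duplicate = playbook.some(
    (b) => cosineSimilarity(b.embedding, candidate.embedding) >= threshold
  );
  return duplicate ? playbook : [...playbook, candidate];
}
```

In practice the embeddings would come from the configured embedding model (e.g. `OPENAI_EMBEDDING_MODEL`); here they are plain arrays so the sketch stays self-contained.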
### LLM Flexibility
- **OpenAI Support**: Use GPT-4, GPT-3.5-turbo
- **LM Studio Support**: Run local models offline
- **Provider Abstraction**: Easy to add new LLM providers
- **Configurable**: Switch providers via environment variables
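The provider abstraction boils down to a common interface selected by `LLM_PROVIDER`. The sketch below uses stubs and hypothetical names (the server's real interfaces live in `src/llm` and may differ), but it shows why switching backends is purely a configuration change:

```typescript
// Every backend exposes the same minimal surface.
interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Stubs standing in for real clients: the OpenAI provider would call the
// OpenAI API, the LM Studio provider its OpenAI-compatible local endpoint.
const openai: LLMProvider = {
  name: "openai",
  complete: async (prompt) => `[openai] ${prompt}`,
};

const lmstudio: LLMProvider = {
  name: "lmstudio",
  complete: async (prompt) => `[lmstudio] ${prompt}`,
};

// Pick a provider from environment configuration, defaulting to OpenAI.
function selectProvider(env: Record<string, string | undefined>): LLMProvider {
  switch (env.LLM_PROVIDER ?? "openai") {
    case "openai":
      return openai;
    case "lmstudio":
      return lmstudio;
    default:
      throw new Error(`Unknown LLM_PROVIDER: ${env.LLM_PROVIDER}`);
  }
}
```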
### Deployment Options
- **Local Development**: Run on your machine
- **Docker**: Full containerization support
- **Ubuntu VM**: Production deployment ready
- **Cloud**: Deploy to any Node.js-compatible platform
---
## How It Works
```mermaid
graph LR
A[Your Query] --> B[Generator]
B --> C[Execute Code]
C --> D[Reflector]
D --> E[Extract Insights]
E --> F[Curator]
F --> G[Update Playbook]
G --> H[Better Next Time]
H --> B
```
### Example: Building an Authentication System
1. **First Query**: "Create login endpoint"
- Generator uses generic strategies
- Creates basic endpoint
- Reflector notices: "Used bcrypt for passwords ✓", "Missing rate limiting ✗"
2. **Curator Updates Playbook**:
- ADD: "Always use bcrypt for password hashing"
- ADD: "Include rate limiting on auth endpoints"
3. **Second Query**: "Create registration endpoint"
- Generator automatically applies learned strategies
- Includes bcrypt AND rate limiting from the start
- Better code, fewer tokens, less iteration
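The Curator's updates in step 2 can be pictured as delta operations applied to the stored playbook rather than a full rewrite, which is what keeps update costs incremental. The types below are a hypothetical sketch, not the server's actual schema:

```typescript
// Hypothetical delta operations the curator might emit.
type DeltaOp =
  | { op: "ADD"; bullet: string }
  | { op: "REMOVE"; bullet: string };

// Apply deltas to a playbook without mutating the original array.
// ADD skips exact duplicates; REMOVE drops matching bullets.
function applyDeltas(playbook: string[], deltas: DeltaOp[]): string[] {
  let result = [...playbook];
  for (const d of deltas) {
    if (d.op === "ADD" && !result.includes(d.bullet)) {
      result.push(d.bullet);
    } else if (d.op === "REMOVE") {
      result = result.filter((b) => b !== d.bullet);
    }
  }
  return result;
}
```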
---
## Available MCP Tools
| Tool | Description | Use Case |
|------|-------------|----------|
| `ace_generate` | Generate code using playbook | Primary code generation |
| `ace_reflect` | Analyze trajectory for insights | After code execution |
| `ace_curate` | Convert insights to updates | Process reflections |
| `ace_update_playbook` | Apply delta operations | Persist learned strategies |
| `ace_get_playbook` | Retrieve current strategies | Review learned knowledge |
| `ace_export_playbook` | Export as JSON | Backup or share playbooks |
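Over MCP, each of these tools is invoked through a standard `tools/call` request. A request to `ace_get_playbook` might look like the following (the argument names are illustrative; consult the tool schemas the server actually exposes):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ace_get_playbook",
    "arguments": { "context": "backend" }
  }
}
```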
---
## Documentation
| Document | Description | Location |
|----------|-------------|----------|
| **Quick Start** | Installation and first steps | `docs/intro/START_HERE.md` |
| **Full Specification** | Complete project details | `docs/intro/DESCRIPTION.md` |
| **Installation Guide** | Detailed setup instructions | `docs/intro/INSTALLATION.md` |
| **Memory Bank** | Project knowledge base | `memory-bank/` |
---
## Docker Deployment
### Local Development
```bash
# Start all services
docker-compose -f docker-compose.dev.yml up
# Dashboard available at http://localhost:3000
```
### Production (Ubuntu VM)
```bash
# Configure environment
cp .env.example .env
# Edit .env with production settings
# Start services
docker-compose up -d
# View logs
docker-compose logs -f ace-server
```
See `docs/intro/INSTALLATION.md` for detailed deployment guides.
---
## Configuration
### Environment Variables
```bash
# LLM Provider Selection
LLM_PROVIDER=openai # 'openai' or 'lmstudio'
# OpenAI Configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
# LM Studio Configuration
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=your-model-name
# ACE Settings
ACE_CONTEXT_DIR=./contexts # Storage directory
ACE_LOG_LEVEL=info # Logging level
ACE_DEDUP_THRESHOLD=0.85 # Similarity threshold (0-1)
ACE_MAX_PLAYBOOK_SIZE=1000 # Max bullets per context
```
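A config loader for the ACE settings above might look like this, with the same defaults and a range check on the similarity threshold. This is a sketch; the real loader lives in `src/utils` and may differ:

```typescript
// Read ACE settings from an environment map, falling back to the documented
// defaults and rejecting an out-of-range dedup threshold.
function loadAceSettings(env: Record<string, string | undefined>) {
  const threshold = Number(env.ACE_DEDUP_THRESHOLD ?? "0.85");
  if (Number.isNaN(threshold) || threshold < 0 || threshold > 1) {
    throw new Error("ACE_DEDUP_THRESHOLD must be a number in [0, 1]");
  }
  return {
    contextDir: env.ACE_CONTEXT_DIR ?? "./contexts",
    logLevel: env.ACE_LOG_LEVEL ?? "info",
    dedupThreshold: threshold,
    maxPlaybookSize: Number(env.ACE_MAX_PLAYBOOK_SIZE ?? "1000"),
  };
}
```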
---
## Project Structure
```
ace-mcp-server/
├── src/
│   ├── core/                 # ACE components (Generator, Reflector, Curator)
│   ├── mcp/                  # MCP server and tools
│   ├── storage/              # Bullet storage and deduplication
│   ├── llm/                  # LLM provider abstraction
│   ├── utils/                # Utilities (config, logger, errors)
│   └── index.ts              # Entry point
├── dashboard/                # Web dashboard (optional)
├── docs/
│   ├── intro/                # Documentation
│   └── archive/              # Archived docs
├── memory-bank/              # Project knowledge base
├── docker-compose.yml        # Production deployment
├── docker-compose.dev.yml    # Development deployment
└── package.json
```
---
## Development
```bash
# Install dependencies
npm install
# Run in development mode (with hot reload)
npm run dev
# Build for production
npm run build
# Run tests
npm test
# Lint code
npm run lint
```
---
## Performance Metrics
Based on Stanford/SambaNova research:
- **86.9% reduction** in context adaptation latency
- **+10.6% improvement** in code generation accuracy
- **30-50% reduction** in storage via semantic deduplication
- **< 2s** for delta operations on 1K bullet playbooks
---
## Contributing
Contributions are welcome! Please see our contributing guidelines.
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Write tests
5. Submit a pull request
---
## License
MIT License - see [LICENSE](LICENSE) file for details
---
## Links
- **Research Paper**: [Agentic Context Engineering](https://arxiv.org/pdf/2510.04618)
- **MCP Specification**: [modelcontextprotocol.io](https://modelcontextprotocol.io)
- **Cursor AI**: [cursor.sh](https://cursor.sh)
- **GitHub**: [Angry-Robot-Deals/ace-mcp](https://github.com/Angry-Robot-Deals/ace-mcp)
---
## Support
- **Email**: support@example.com
- **Discussions**: [GitHub Discussions](https://github.com/Angry-Robot-Deals/ace-mcp/discussions)
- **Issues**: [GitHub Issues](https://github.com/Angry-Robot-Deals/ace-mcp/issues)
- **Documentation**: See `docs/intro/` directory
---
## Acknowledgments
Based on research by:
- Stanford University
- SambaNova Systems
Paper: "Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models" (October 2025)
---
<div align="center">
<sub>Built with ❤️ for developers who want their AI to learn and improve</sub>
</div>