<div align="center"> <h1>๐Ÿง  ACE MCP Server</h1> <p> <strong>Self-Improving Context for Your AI Coding Assistant</strong><br/> <i>Reduce tokens by 86.9%, improve accuracy by 10.6%</i> </p> <p> <a href="https://www.npmjs.com/package/ace-mcp-server"> <img src="https://img.shields.io/npm/v/ace-mcp-server?style=for-the-badge&color=blue" alt="npm version"/> </a> <a href="https://github.com/Angry-Robot-Deals/ace-mcp/blob/main/LICENSE"> <img src="https://img.shields.io/badge/license-MIT-purple?style=for-the-badge" alt="License"/> </a> <a href="https://arxiv.org/pdf/2510.04618"> <img src="https://img.shields.io/badge/Research-Stanford-red?style=for-the-badge" alt="Research Paper"/> </a> </p> <p> <a href="#-quick-start">Quick Start</a> โ€ข <a href="#-features">Features</a> โ€ข <a href="#-how-it-works">How It Works</a> โ€ข <a href="#-documentation">Documentation</a> โ€ข <a href="#-deployment">Deployment</a> </p> </div> --- ## ๐ŸŒŸ Overview **ACE MCP Server** implements the Agentic Context Engineering framework as a Model Context Protocol (MCP) server for Cursor AI. Your AI assistant learns from its own execution feedback, building a self-improving knowledge base that gets better with every task. Based on [research](https://arxiv.org/pdf/2510.04618) from Stanford University & SambaNova Systems (October 2025). <div align="center"> <table> <tr> <td align="center">๐Ÿ“‰ <strong>86.9% Token Reduction</strong><br/><sub>Incremental updates vs full rewrites</sub></td> <td align="center">๐Ÿ“ˆ <strong>+10.6% Accuracy</strong><br/><sub>Self-learning from feedback</sub></td> <td align="center">๐Ÿ”„ <strong>Continuous Improvement</strong><br/><sub>Gets better with each use</sub></td> <td align="center">๐ŸŽฏ <strong>Context Isolation</strong><br/><sub>Separate playbooks per domain</sub></td> </tr> </table> </div> --- ## ๐ŸŽฏ Why ACE? Traditional AI assistants forget everything between conversations. ACE remembers what works and what doesn't, creating a **playbook** of proven strategies that grows with your team's experience. 
---

## 🚀 Features

### Core ACE Framework

- **Generator**: Creates code using learned strategies
- **Reflector**: Analyzes what worked and what didn't
- **Curator**: Synthesizes insights into playbook updates

### Smart Context Management

- **Incremental Updates**: Send only deltas, not the full context
- **Semantic Deduplication**: Automatically merge similar strategies
- **Multi-Context Support**: Separate playbooks for frontend, backend, DevOps, etc.
- **Persistent Storage**: JSON-based storage with configurable backends

### LLM Flexibility

- **OpenAI Support**: Use GPT-4 or GPT-3.5-turbo
- **LM Studio Support**: Run local models offline
- **Provider Abstraction**: Easy to add new LLM providers
- **Configurable**: Switch providers via environment variables

### Deployment Options

- **Local Development**: Run on your machine
- **Docker**: Full containerization support
- **Ubuntu VM**: Production-ready deployment
- **Cloud**: Deploy to any Node.js-compatible platform

---

## 📊 How It Works

```mermaid
graph LR
    A[Your Query] --> B[Generator]
    B --> C[Execute Code]
    C --> D[Reflector]
    D --> E[Extract Insights]
    E --> F[Curator]
    F --> G[Update Playbook]
    G --> H[Better Next Time]
    H --> B
```

### Example: Building an Authentication System

1. **First Query**: "Create login endpoint"
   - Generator uses generic strategies
   - Creates a basic endpoint
   - Reflector notices: "Used bcrypt for passwords ✓", "Missing rate limiting ✗"

2. **Curator Updates Playbook**:
   - ADD: "Always use bcrypt for password hashing"
   - ADD: "Include rate limiting on auth endpoints"

3. **Second Query**: "Create registration endpoint"
   - Generator automatically applies the learned strategies
   - Includes bcrypt AND rate limiting from the start
   - Better code, fewer tokens, less iteration
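### Bullets and Deltas

Mechanically, each learned strategy ("Always use bcrypt for password hashing") becomes a bullet in the playbook, and the Curator emits delta operations against it instead of rewriting the whole context. The self-contained sketch below illustrates the idea; the `Bullet`/`Delta` shapes and the naive cosine-similarity check are assumptions for illustration, not the real types in `src/core/` and `src/storage/`. The `0.85` cutoff mirrors the default `ACE_DEDUP_THRESHOLD` described under Configuration.

```typescript
// Illustrative shapes only -- the server's actual types may differ.
interface Bullet {
  id: string;
  content: string;      // e.g. "Always use bcrypt for password hashing"
  embedding: number[];  // produced by the configured embedding model
}

type Delta =
  | { op: "ADD"; bullet: Bullet }
  | { op: "REMOVE"; id: string };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Apply deltas incrementally -- the playbook is never rewritten wholesale.
// ADDs that are semantically close to an existing bullet are dropped,
// which is what keeps the playbook free of redundant knowledge.
function applyDeltas(playbook: Bullet[], deltas: Delta[], threshold = 0.85): Bullet[] {
  const next = [...playbook];
  for (const delta of deltas) {
    if (delta.op === "REMOVE") {
      const i = next.findIndex((b) => b.id === delta.id);
      if (i !== -1) next.splice(i, 1);
    } else {
      const duplicate = next.some(
        (b) => cosineSimilarity(b.embedding, delta.bullet.embedding) >= threshold,
      );
      if (!duplicate) next.push(delta.bullet);
    }
  }
  return next;
}
```

Because only the deltas travel between client and server, the cost of an update scales with what changed, not with the size of the accumulated playbook.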
---

## 🛠️ Available MCP Tools

| Tool | Description | Use Case |
|------|-------------|----------|
| `ace_generate` | Generate code using the playbook | Primary code generation |
| `ace_reflect` | Analyze a trajectory for insights | After code execution |
| `ace_curate` | Convert insights into updates | Process reflections |
| `ace_update_playbook` | Apply delta operations | Persist learned strategies |
| `ace_get_playbook` | Retrieve current strategies | Review learned knowledge |
| `ace_export_playbook` | Export as JSON | Back up or share playbooks |
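### Driving the Loop Programmatically

Outside of Cursor, the same Generator → Reflector → Curator cycle can be driven from any MCP client. The sketch below chains the four core tools using the TypeScript MCP SDK; the argument names (`query`, `trajectory`, `insights`, `deltas`) are guesses for illustration — inspect each tool's input schema via `listTools()` for the real contract.

```typescript
// Hypothetical end-to-end loop; tool argument shapes are assumptions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "ace-loop-demo", version: "0.0.0" });
await client.connect(
  new StdioClientTransport({
    command: "node",
    args: ["/absolute/path/to/ace-mcp-server/dist/index.js"],
  }),
);

// 1. Generate code using the current playbook.
const generated = await client.callTool({
  name: "ace_generate",
  arguments: { query: "Create a login endpoint" },
});

// 2. Reflect on the trajectory (generated code plus execution outcome).
const reflection = await client.callTool({
  name: "ace_reflect",
  arguments: { trajectory: JSON.stringify(generated.content) },
});

// 3. Curate the insights into delta operations.
const curated = await client.callTool({
  name: "ace_curate",
  arguments: { insights: JSON.stringify(reflection.content) },
});

// 4. Persist the deltas so the next query starts smarter.
await client.callTool({
  name: "ace_update_playbook",
  arguments: { deltas: JSON.stringify(curated.content) },
});

await client.close();
```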
---

## 📖 Documentation

| Document | Description | Location |
|----------|-------------|----------|
| **Quick Start** | Installation and first steps | `docs/intro/START_HERE.md` |
| **Full Specification** | Complete project details | `docs/intro/DESCRIPTION.md` |
| **Installation Guide** | Detailed setup instructions | `docs/intro/INSTALLATION.md` |
| **Memory Bank** | Project knowledge base | `memory-bank/` |

---

## 🐳 Docker Deployment

### Local Development

```bash
# Start all services
docker-compose -f docker-compose.dev.yml up

# Dashboard available at http://localhost:3000
```

### Production (Ubuntu VM)

```bash
# Configure the environment
cp .env.example .env
# Edit .env with production settings

# Start services
docker-compose up -d

# View logs
docker-compose logs -f ace-server
```

See `docs/intro/INSTALLATION.md` for detailed deployment guides.

---

## ⚙️ Configuration

### Environment Variables

```bash
# LLM Provider Selection
LLM_PROVIDER=openai            # 'openai' or 'lmstudio'

# OpenAI Configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

# LM Studio Configuration
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=your-model-name

# ACE Settings
ACE_CONTEXT_DIR=./contexts     # Storage directory
ACE_LOG_LEVEL=info             # Logging level
ACE_DEDUP_THRESHOLD=0.85       # Similarity threshold (0-1)
ACE_MAX_PLAYBOOK_SIZE=1000     # Max bullets per context
```

---

## 🏗️ Project Structure

```
ace-mcp-server/
├── src/
│   ├── core/          # ACE components (Generator, Reflector, Curator)
│   ├── mcp/           # MCP server and tools
│   ├── storage/       # Bullet storage and deduplication
│   ├── llm/           # LLM provider abstraction
│   ├── utils/         # Utilities (config, logger, errors)
│   └── index.ts       # Entry point
├── dashboard/         # Web dashboard (optional)
├── docs/
│   ├── intro/         # Documentation
│   └── archive/       # Archived docs
├── memory-bank/       # Project knowledge base
├── docker-compose.yml       # Production deployment
├── docker-compose.dev.yml   # Development deployment
└── package.json
```

---

## 🧪 Development

```bash
# Install dependencies
npm install

# Run in development mode (with hot reload)
npm run dev

# Build for production
npm run build

# Run tests
npm test

# Lint code
npm run lint
```

---

## 📈 Performance Metrics

Based on the Stanford/SambaNova research:

- **86.9% reduction** in context adaptation latency
- **+10.6% improvement** in code generation accuracy
- **30-50% reduction** in storage via semantic deduplication
- **< 2s** for delta operations on 1,000-bullet playbooks

---

## 🤝 Contributing

Contributions are welcome! Please see our contributing guidelines.

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Write tests
5. Submit a pull request

---

## 📄 License

MIT License - see the [LICENSE](LICENSE) file for details.

---

## 🔗 Links

- **Research Paper**: [Agentic Context Engineering](https://arxiv.org/pdf/2510.04618)
- **MCP Specification**: [modelcontextprotocol.io](https://modelcontextprotocol.io)
- **Cursor AI**: [cursor.sh](https://cursor.sh)
- **GitHub**: [Angry-Robot-Deals/ace-mcp](https://github.com/Angry-Robot-Deals/ace-mcp)

---

## 💬 Support

- 📧 **Email**: support@example.com
- 💬 **Discussions**: [GitHub Discussions](https://github.com/Angry-Robot-Deals/ace-mcp/discussions)
- 🐛 **Issues**: [GitHub Issues](https://github.com/Angry-Robot-Deals/ace-mcp/issues)
- 📖 **Documentation**: See the `docs/intro/` directory

---

## 🙏 Acknowledgments

Based on research by:

- Stanford University
- SambaNova Systems

Paper: "Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models" (October 2025)

---

<div align="center">
  <sub>Built with ❤️ for developers who want their AI to learn and improve</sub>
</div>
