
ACE MCP Server


🌟 Overview

ACE MCP Server implements the Agentic Context Engineering framework as a Model Context Protocol (MCP) server for Cursor AI. Your AI assistant learns from its own execution feedback, building a self-improving knowledge base that gets better with every task.

Based on research from Stanford University & SambaNova Systems (October 2025).


🎯 Why ACE?

Traditional AI assistants forget everything between conversations. ACE remembers what works and what doesn't, creating a playbook of proven strategies that grows with your team's experience.

The Problem

  • 💸 High token costs from sending the full context with every request

  • 🔁 The same mistakes repeated across conversations

  • 📝 No learning from past successes and failures

  • 🤷 Generic responses that don't fit your codebase

The Solution

  • ✅ Incremental delta updates (send only changes; see the sketch after this list)

  • ✅ Self-learning from execution feedback

  • ✅ Semantic deduplication (no redundant knowledge)

  • ✅ Context-aware strategies per domain

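To make "send only changes" concrete, here is a minimal TypeScript sketch of what a delta update could look like. The DeltaOp shape and field names are illustrative assumptions, not the server's actual types:

type Bullet = { id: string; text: string; context: string };

// Hypothetical delta operations; the real types live in the server source and may differ.
type DeltaOp =
  | { op: "ADD"; bullet: Bullet }               // introduce a new strategy
  | { op: "UPDATE"; id: string; text: string }  // refine an existing strategy
  | { op: "REMOVE"; id: string };               // retire a strategy that failed

// Only operations like these cross the wire -- never the full playbook.
const delta: DeltaOp[] = [
  { op: "ADD", bullet: { id: "auth-001", text: "Always use bcrypt for password hashing", context: "backend" } },
];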

⚡ Quick Start

Prerequisites

  • Node.js 18+

  • Cursor AI or MCP-compatible client

  • OpenAI API key OR local LM Studio server

Installation

# Clone repository
git clone https://github.com/Angry-Robot-Deals/ace-mcp.git
cd ace-mcp

# Install dependencies
npm install

# Configure environment
cp .env.example .env
# Edit .env with your LLM provider settings

# Build
npm run build

# Start server
npm start

Cursor AI Configuration

Add to ~/.cursor/mcp.json:

{ "mcpServers": { "ace-context-engine": { "command": "node", "args": ["/absolute/path/to/ace-mcp-server/dist/index.js"], "env": { "LLM_PROVIDER": "openai", "OPENAI_API_KEY": "sk-your-api-key-here", "ACE_CONTEXT_DIR": "./contexts", "ACE_LOG_LEVEL": "info" } } } }

Using Local LM Studio

{ "mcpServers": { "ace-context-engine": { "command": "node", "args": ["/absolute/path/to/ace-mcp-server/dist/index.js"], "env": { "LLM_PROVIDER": "lmstudio", "LMSTUDIO_BASE_URL": "http://localhost:1234/v1", "LMSTUDIO_MODEL": "your-model-name", "ACE_CONTEXT_DIR": "./contexts" } } } }

🚀 Features

Core ACE Framework

  • Generator: Creates code using learned strategies

  • Reflector: Analyzes what worked and what didn't

  • Curator: Synthesizes insights into playbook updates (see the sketch after this list)

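As a rough orientation, the three roles compose into a loop like the following TypeScript sketch. The interface names and signatures are assumptions for illustration; the real components live in src/core/:

type Insight = { observation: string; helpful: boolean };
type PlaybookOp = { op: "ADD" | "REMOVE"; text: string };

interface Generator {
  // produce code for a query, informed by learned strategies
  generate(query: string, playbook: string[]): Promise<string>;
}
interface Reflector {
  // analyze an execution trajectory: what worked, what didn't
  reflect(trajectory: string): Promise<Insight[]>;
}
interface Curator {
  // distill insights into incremental playbook updates
  curate(insights: Insight[]): Promise<PlaybookOp[]>;
}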
Smart Context Management

  • Incremental Updates: Only send deltas, not full context

  • Semantic Deduplication: Automatically merge similar strategies (sketched below)

  • Multi-Context Support: Separate playbooks for frontend, backend, DevOps, etc.

  • Persistent Storage: JSON-based storage with configurable backends

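A minimal sketch of how semantic deduplication can work, assuming strategies are compared by embedding cosine similarity. The embed callback and the 0.85 default (mirroring ACE_DEDUP_THRESHOLD in the Configuration section) are placeholders:

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A candidate bullet is a duplicate if it is too similar to any existing bullet.
async function isDuplicate(
  candidate: string,
  existing: string[],
  embed: (s: string) => Promise<number[]>,
  threshold = 0.85,
): Promise<boolean> {
  const c = await embed(candidate);
  for (const text of existing) {
    if (cosine(c, await embed(text)) >= threshold) return true;
  }
  return false;
}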
LLM Flexibility

  • OpenAI Support: Use GPT-4, GPT-3.5-turbo

  • LM Studio Support: Run local models offline

  • Provider Abstraction: Easy to add new LLM providers (see the sketch after this list)

  • Configurable: Switch providers via environment variables

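The provider abstraction boils down to a small interface plus a factory keyed on LLM_PROVIDER. This is a sketch under assumed names; the concrete classes in src/llm/ may look different:

interface LLMProvider {
  complete(prompt: string): Promise<string>;
  embed(text: string): Promise<number[]>;
}

// Hypothetical provider classes -- stand-ins for whatever src/llm/ exports.
declare class OpenAIProvider implements LLMProvider {
  constructor(apiKey: string, model: string);
  complete(prompt: string): Promise<string>;
  embed(text: string): Promise<number[]>;
}
declare class LMStudioProvider implements LLMProvider {
  constructor(baseUrl: string, model: string);
  complete(prompt: string): Promise<string>;
  embed(text: string): Promise<number[]>;
}

// Select a provider from environment variables, as described above.
function createProvider(env: NodeJS.ProcessEnv): LLMProvider {
  switch (env.LLM_PROVIDER) {
    case "openai":
      return new OpenAIProvider(env.OPENAI_API_KEY ?? "", env.OPENAI_MODEL ?? "gpt-4");
    case "lmstudio":
      return new LMStudioProvider(env.LMSTUDIO_BASE_URL ?? "http://localhost:1234/v1", env.LMSTUDIO_MODEL ?? "");
    default:
      throw new Error(`Unsupported LLM_PROVIDER: ${env.LLM_PROVIDER}`);
  }
}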
Deployment Options

  • Local Development: Run on your machine

  • Docker: Full containerization support

  • Ubuntu VM: Ready for production deployment

  • Cloud: Deploy to any Node.js-compatible platform


📊 How It Works

graph LR
  A[Your Query] --> B[Generator]
  B --> C[Execute Code]
  C --> D[Reflector]
  D --> E[Extract Insights]
  E --> F[Curator]
  F --> G[Update Playbook]
  G --> H[Better Next Time]
  H --> B

Example: Building an Authentication System

  1. First Query: "Create login endpoint"

    • Generator uses generic strategies

    • Creates basic endpoint

    • Reflector notices: "Used bcrypt for passwords ✓", "Missing rate limiting ✗"

  2. Curator Updates Playbook (see the delta sketch after this list):

    • ADD: "Always use bcrypt for password hashing"

    • ADD: "Include rate limiting on auth endpoints"

  3. Second Query: "Create registration endpoint"

    • Generator automatically applies learned strategies

    • Includes bcrypt AND rate limiting from the start

    • Better code, fewer tokens, less iteration

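In delta form, the curator's two additions from step 2 might look like this (same hypothetical shape as the DeltaOp sketch under "The Solution"):

const authDelta = [
  { op: "ADD", bullet: { id: "auth-bcrypt", text: "Always use bcrypt for password hashing", context: "backend" } },
  { op: "ADD", bullet: { id: "auth-rate-limit", text: "Include rate limiting on auth endpoints", context: "backend" } },
];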

🛠️ Available MCP Tools

Tool                | Description                          | Use Case
--------------------|--------------------------------------|----------------------------
ace_generate        | Generate code using the playbook     | Primary code generation
ace_reflect         | Analyze a trajectory for insights    | After code execution
ace_curate          | Convert insights into delta updates  | Process reflections
ace_update_playbook | Apply delta operations               | Persist learned strategies
ace_get_playbook    | Retrieve current strategies          | Review learned knowledge
ace_export_playbook | Export the playbook as JSON          | Back up or share playbooks

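For orientation, calling one of these tools from an MCP client is a standard tools/call JSON-RPC request. The payload below is a sketch; the argument names ("query", "context") are assumptions, so check the tool's declared input schema:

// Hypothetical client-side payload for invoking ace_generate over MCP.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "ace_generate",
    arguments: {
      query: "Create login endpoint", // assumed argument names
      context: "backend",
    },
  },
};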

📖 Documentation

Document           | Description                  | Location
-------------------|------------------------------|----------------------------
Quick Start        | Installation and first steps | docs/intro/START_HERE.md
Full Specification | Complete project details     | docs/intro/DESCRIPTION.md
Installation Guide | Detailed setup instructions  | docs/intro/INSTALLATION.md
Memory Bank        | Project knowledge base       | memory-bank/


๐Ÿณ Docker Deployment

Local Development

# Start all services
docker-compose -f docker-compose.dev.yml up

# Dashboard available at http://localhost:3000

Production (Ubuntu VM)

# Configure environment
cp .env.example .env
# Edit .env with production settings

# Start services
docker-compose up -d

# View logs
docker-compose logs -f ace-server

See docs/intro/INSTALLATION.md for detailed deployment guides.


⚙️ Configuration

Environment Variables

# LLM Provider Selection
LLM_PROVIDER=openai            # 'openai' or 'lmstudio'

# OpenAI Configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

# LM Studio Configuration
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=your-model-name

# ACE Settings
ACE_CONTEXT_DIR=./contexts     # Storage directory
ACE_LOG_LEVEL=info             # Logging level
ACE_DEDUP_THRESHOLD=0.85       # Similarity threshold (0-1)
ACE_MAX_PLAYBOOK_SIZE=1000     # Max bullets per context

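For reference, a minimal sketch of reading these settings at startup; the real loader lives in src/utils/, and the field names and defaults here simply mirror the values above:

// Illustrative config loader (assumed field names).
const config = {
  provider: process.env.LLM_PROVIDER ?? "openai",
  contextDir: process.env.ACE_CONTEXT_DIR ?? "./contexts",
  logLevel: process.env.ACE_LOG_LEVEL ?? "info",
  dedupThreshold: Number(process.env.ACE_DEDUP_THRESHOLD ?? 0.85),
  maxPlaybookSize: Number(process.env.ACE_MAX_PLAYBOOK_SIZE ?? 1000),
};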
๐Ÿ—๏ธ Project Structure

ace-mcp-server/
├── src/
│   ├── core/               # ACE components (Generator, Reflector, Curator)
│   ├── mcp/                # MCP server and tools
│   ├── storage/            # Bullet storage and deduplication
│   ├── llm/                # LLM provider abstraction
│   ├── utils/              # Utilities (config, logger, errors)
│   └── index.ts            # Entry point
├── dashboard/              # Web dashboard (optional)
├── docs/
│   ├── intro/              # Documentation
│   └── archive/            # Archived docs
├── memory-bank/            # Project knowledge base
├── docker-compose.yml      # Production deployment
├── docker-compose.dev.yml  # Development deployment
└── package.json

🧪 Development

# Install dependencies
npm install

# Run in development mode (with hot reload)
npm run dev

# Build for production
npm run build

# Run tests
npm test

# Lint code
npm run lint

📈 Performance Metrics

Based on Stanford/SambaNova research:

  • 86.9% reduction in context adaptation latency

  • +10.6% improvement in code generation accuracy

  • 30-50% reduction in storage via semantic deduplication

  • < 2s for delta operations on 1,000-bullet playbooks


🤝 Contributing

Contributions are welcome! Please see our contributing guidelines.

  1. Fork the repository

  2. Create a feature branch

  3. Make your changes

  4. Write tests

  5. Submit a pull request


📄 License

MIT License - see LICENSE file for details


🔗 Links


💬 Support


🙏 Acknowledgments

Based on research by:

  • Stanford University

  • SambaNova Systems

Paper: "Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models" (October 2025)



