# Technical Context: ACE MCP Server

## Documentation Structure (Updated)

```
docs/
├── intro/                       # Project documentation
│   ├── DESCRIPTION.md           # Full project specification
│   ├── INSTALLATION.md          # Installation guide
│   ├── START_HERE.md            # Quick start guide
│   ├── COPY_GUIDE.md            # Implementation guide
│   ├── ASSETS_CHECKLIST.md      # Asset checklist
│   ├── INITIALIZATION_REPORT.md # Initialization report
│   └── PROJECT_STATUS.md        # Current status
└── archive/                     # Archived documentation

memory-bank/                     # Memory Bank (Source of Truth)
├── projectbrief.md
├── techContext.md (this file)
├── productContext.md
├── systemPatterns.md
├── activeContext.md
├── tasks.md
├── progress.md
├── style-guide.md
├── creative/
└── reflection/
```

## Technology Stack

### Core Runtime

- **Language**: TypeScript 5.x
- **Runtime**: Node.js (LTS version)
- **Package Manager**: npm
- **Build Tool**: TypeScript compiler (tsc)

### MCP Protocol

- **Transport**: stdio (Standard Input/Output)
- **Protocol**: JSON-RPC 2.0
- **Specification**: MCP 2025-06-18
- **Client**: Cursor AI

### ACE Framework Components

#### 1. Generator

- **Purpose**: Generate code/content using the current playbook
- **Input**: Query + context_id + optional parameters
- **Output**: Trajectory (step-by-step execution log)
- **LLM Usage**: Primary code generation

#### 2. Reflector

- **Purpose**: Analyze trajectory and identify helpful/harmful strategies
- **Input**: Trajectory + execution results
- **Output**: Insights with helpful/harmful categorization
- **LLM Usage**: Reflection and analysis

#### 3. Curator

- **Purpose**: Convert insights into delta operations
- **Input**: Insights + current playbook
- **Output**: Delta operations (ADD/UPDATE/DELETE)
- **LLM Usage**: Decision making for playbook updates

### Storage Layer

#### Bullet Structure

```typescript
interface Bullet {
  id: string;              // Unique identifier
  content: string;         // Strategy/guideline text
  helpful_count: number;   // Positive feedback count
  harmful_count: number;   // Negative feedback count
  created_at: Date;        // Creation timestamp
  last_used_at: Date;      // Last usage timestamp
  embedding?: number[];    // TF-IDF vector for similarity
}
```

#### Playbook Structure

```typescript
interface Playbook {
  context_id: string;      // Unique context identifier
  bullets: Bullet[];       // Array of strategies
  metadata: {
    created_at: Date;
    updated_at: Date;
    total_operations: number;
    version: string;
  };
}
```

#### Delta Operations

```typescript
type DeltaOperation =
  | { operation: 'ADD'; bullet: Omit<Bullet, 'id'> }
  | { operation: 'UPDATE'; bullet_id: string; updates: Partial<Bullet> }
  | { operation: 'DELETE'; bullet_id: string };
```

### Semantic Deduplication

- **Algorithm**: Cosine similarity with TF-IDF embeddings
- **Default Threshold**: 0.85 (configurable via `ACE_DEDUP_THRESHOLD`)
- **Process** (see the sketch below):
  1. Compute TF-IDF vectors for all bullets
  2. Calculate pairwise cosine similarity
  3. Merge bullets above the threshold
  4. Combine `helpful_count` values
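A minimal sketch of that merge pass, assuming the `Bullet` shape above. The helper names (`cosineSimilarity`, `dedupeBullets`) and the `./bullet` import path are illustrative assumptions, not part of the documented API:

```typescript
// Illustrative only: helper names and import path are hypothetical.
import type { Bullet } from './bullet';

// Cosine similarity between two TF-IDF vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return normA && normB ? dot / (Math.sqrt(normA) * Math.sqrt(normB)) : 0;
}

// Keep the first occurrence of each near-duplicate bullet and fold the
// helpful_count of later duplicates into it. Default threshold mirrors
// ACE_DEDUP_THRESHOLD=0.85.
function dedupeBullets(bullets: Bullet[], threshold = 0.85): Bullet[] {
  const kept: Bullet[] = [];
  for (const bullet of bullets) {
    const duplicate = kept.find(
      (existing) =>
        existing.embedding &&
        bullet.embedding &&
        cosineSimilarity(existing.embedding, bullet.embedding) >= threshold
    );
    if (duplicate) {
      duplicate.helpful_count += bullet.helpful_count;
    } else {
      kept.push({ ...bullet });
    }
  }
  return kept;
}
```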
### LLM Integration

#### Provider Abstraction

```typescript
interface LLMProvider {
  name: 'openai' | 'lmstudio';
  chat(messages: Message[]): Promise<string>;
  embed(text: string): Promise<number[]>;
}
```

#### OpenAI Configuration

- **API**: https://api.openai.com/v1
- **Endpoints**:
  - `/chat/completions` - Chat generation
  - `/embeddings` - Text embeddings
- **Models**:
  - Chat: gpt-4, gpt-3.5-turbo
  - Embeddings: text-embedding-3-small

#### LM Studio Configuration

- **API**: http://10.242.247.136:11888/v1
- **Endpoints**:
  - `/v1/chat/completions` - Chat generation
  - `/v1/completions` - Text completion
  - `/v1/embeddings` - Text embeddings
  - `/v1/models` - List available models
- **Features**:
  - OpenAI-compatible API
  - Local model execution
  - No API key required
  - Offline operation

### MCP Tools

1. **ace_generate**: Generate code using the playbook
2. **ace_reflect**: Analyze a trajectory for insights
3. **ace_curate**: Create delta operations from insights
4. **ace_update_playbook**: Apply deltas to the playbook
5. **ace_get_playbook**: Retrieve the current playbook
6. **ace_export_playbook**: Export the playbook as JSON

### Configuration

#### Environment Variables

```bash
# Core Settings
ACE_CONTEXT_DIR=./contexts        # Playbook storage directory
ACE_LOG_LEVEL=info                # Logging level
ACE_DEDUP_THRESHOLD=0.85          # Similarity threshold

# LLM Provider
LLM_PROVIDER=openai               # 'openai' or 'lmstudio'

# OpenAI Configuration
OPENAI_API_KEY=sk-...             # OpenAI API key
OPENAI_MODEL=gpt-4                # Model to use
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

# LM Studio Configuration
LMSTUDIO_BASE_URL=http://10.242.247.136:11888/v1
LMSTUDIO_MODEL=local-model        # Model name from /v1/models
```

### Docker Architecture

#### Services

1. **ace-mcp-server**: Main MCP server
2. **ace-dashboard**: Web dashboard (optional)

#### Volumes

- `contexts`: Persistent playbook storage
- `logs`: Application logs

#### Networks

- `ace-network`: Internal communication

### File Structure

```
ace-mcp-server/
├── src/
│   ├── core/                  # ACE components
│   │   ├── generator.ts
│   │   ├── reflector.ts
│   │   └── curator.ts
│   ├── mcp/                   # MCP protocol
│   │   ├── server.ts
│   │   └── tools.ts
│   ├── storage/               # Storage layer
│   │   ├── bullet.ts
│   │   ├── playbook.ts
│   │   ├── deduplicator.ts
│   │   └── embeddings.ts
│   ├── llm/                   # LLM providers (NEW)
│   │   ├── provider.ts
│   │   ├── openai.ts
│   │   └── lmstudio.ts
│   ├── utils/                 # Utilities
│   │   ├── config.ts
│   │   ├── logger.ts
│   │   └── errors.ts
│   └── index.ts               # Entry point
├── dashboard/                 # Web dashboard
├── contexts/                  # Playbook storage
├── docker/                    # Docker configs (NEW)
├── docs/                      # Documentation
│   ├── intro/                 # Project docs
│   └── archive/               # Archived docs
└── memory-bank/               # Memory Bank
```

## Dependencies

### Production

- `@modelcontextprotocol/sdk`: MCP protocol implementation
- `zod`: Schema validation
- `openai`: OpenAI API client (NEW)
- `axios`: HTTP client for LM Studio (NEW)
- `fs-extra`: Enhanced file operations
- `uuid`: UUID generation

### Development

- `typescript`: Type system
- `@types/node`: Node.js types
- `ts-node`: TypeScript execution
- `nodemon`: Development server
- `jest`: Testing framework
- `ts-jest`: Jest TypeScript support

## Performance Targets

- **Latency**: < 2s for delta operations
- **Memory**: < 512MB for 10K bullets
- **Deduplication**: Process 1K bullets in < 5s
- **Storage**: < 1MB per 1K bullets

---

**Last Updated**: 2025-10-28 (Documentation structure updated)
