# Tasks: ACE MCP Server - Implementation Roadmap
## Task ID: ACE-INIT-001
**Title**: Initialize Project with LLM Provider Abstraction and Docker Support
**Priority**: P0 (Critical)
**Status**: In Progress
**Assigned**: Current Session
**Created**: 2025-10-28
---
## Phase 1: Project Analysis & Setup ✅
### 1.1 Memory Bank Initialization ✅
- [x] Create `projectbrief.md`
- [x] Create `techContext.md`
- [x] Create `productContext.md`
- [x] Create `systemPatterns.md`
- [x] Create `activeContext.md`
- [x] Create `tasks.md`
- [ ] Create `progress.md`
- [ ] Create `style-guide.md`
### 1.2 Project Structure Analysis
- [ ] Review existing `package.json` dependencies
- [ ] Analyze `tsconfig.json` configuration
- [ ] Check dashboard files structure
- [ ] Verify `.env.example` completeness
- [ ] List missing TypeScript source files
---
## Phase 2: LLM Provider Abstraction Layer ⏳
### 2.1 Core Provider Interface
**File**: `src/llm/provider.ts`
```typescript
export interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

export interface LLMProvider {
  name: string;
  chat(messages: Message[], options?: ChatOptions): Promise<string>;
  embed(text: string): Promise<number[]>;
  listModels?(): Promise<string[]>;
}

export interface ChatOptions {
  temperature?: number;
  maxTokens?: number;
  model?: string;
}

// Per-provider configs; fields mirror the Zod schema in Phase 3.
export interface OpenAIConfig {
  apiKey: string;
  model: string;
  embeddingModel: string;
  timeout: number;
}

export interface LMStudioConfig {
  baseUrl: string;
  model: string;
  timeout: number;
}

export interface LLMProviderConfig {
  provider: 'openai' | 'lmstudio';
  openai?: OpenAIConfig;
  lmstudio?: LMStudioConfig;
}
```
**Checklist**:
- [ ] Define `Message` interface
- [ ] Define `LLMProvider` interface
- [ ] Define `ChatOptions` interface
- [ ] Define `LLMProviderConfig` interface
- [ ] Define `OpenAIConfig` and `LMStudioConfig` interfaces
- [ ] Add JSDoc documentation
- [ ] Export all types
### 2.2 OpenAI Provider Implementation
**File**: `src/llm/openai.ts`
**Dependencies**:
```bash
npm install openai
```
**Checklist**:
- [ ] Install `openai` package
- [ ] Import OpenAI SDK
- [ ] Implement `chat()` method
- [ ] Implement `embed()` method
- [ ] Implement `listModels()` method
- [ ] Add error handling with retry logic
- [ ] Add rate limiting
- [ ] Add request logging
- [ ] Add timeout configuration
- [ ] Write unit tests
**Key Methods**:
```typescript
export class OpenAIProvider implements LLMProvider {
  constructor(config: OpenAIConfig);
  async chat(messages: Message[], options?: ChatOptions): Promise<string>;
  async embed(text: string): Promise<number[]>;
  async listModels(): Promise<string[]>;
}
```
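A minimal sketch of how these methods might map onto the official `openai` SDK. Retry logic, rate limiting, and logging from the checklist are omitted; `OpenAIConfig` is the interface from 2.1:
```typescript
import OpenAI from 'openai';
import { LLMProvider, Message, ChatOptions, OpenAIConfig } from './provider';

export class OpenAIProvider implements LLMProvider {
  name = 'openai';
  private client: OpenAI;

  constructor(private config: OpenAIConfig) {
    // The SDK accepts a per-client timeout in milliseconds.
    this.client = new OpenAI({ apiKey: config.apiKey, timeout: config.timeout });
  }

  async chat(messages: Message[], options?: ChatOptions): Promise<string> {
    const response = await this.client.chat.completions.create({
      model: options?.model ?? this.config.model,
      messages,
      temperature: options?.temperature,
      max_tokens: options?.maxTokens
    });
    return response.choices[0]?.message?.content ?? '';
  }

  async embed(text: string): Promise<number[]> {
    const response = await this.client.embeddings.create({
      model: this.config.embeddingModel,
      input: text
    });
    return response.data[0].embedding;
  }

  async listModels(): Promise<string[]> {
    const models = await this.client.models.list();
    return models.data.map((m) => m.id);
  }
}
```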
### 2.3 LM Studio Provider Implementation
**File**: `src/llm/lmstudio.ts`
**Dependencies**:
```bash
npm install axios
```
**Endpoints**:
- POST `/v1/chat/completions` - Chat generation
- POST `/v1/embeddings` - Text embeddings
- GET `/v1/models` - List models
**Checklist**:
- [ ] Install `axios` package
- [ ] Implement `chat()` using `/v1/chat/completions`
- [ ] Implement `embed()` using `/v1/embeddings`
- [ ] Implement `listModels()` using `/v1/models`
- [ ] Add connection error handling
- [ ] Add timeout configuration
- [ ] Add request logging
- [ ] Handle API format differences from OpenAI
- [ ] Write unit tests
**Key Methods**:
```typescript
export class LMStudioProvider implements LLMProvider {
  constructor(config: LMStudioConfig);
  async chat(messages: Message[], options?: ChatOptions): Promise<string>;
  async embed(text: string): Promise<number[]>;
  async listModels(): Promise<string[]>;
}
```
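A hedged sketch against LM Studio's OpenAI-compatible endpoints, assuming `baseUrl` already includes the `/v1` prefix (as in `.env.example`). Error handling and logging from the checklist are omitted:
```typescript
import axios, { AxiosInstance } from 'axios';
import { LLMProvider, Message, ChatOptions, LMStudioConfig } from './provider';

export class LMStudioProvider implements LLMProvider {
  name = 'lmstudio';
  private http: AxiosInstance;

  constructor(private config: LMStudioConfig) {
    // e.g. baseUrl = http://10.242.247.136:11888/v1
    this.http = axios.create({ baseURL: config.baseUrl, timeout: config.timeout });
  }

  async chat(messages: Message[], options?: ChatOptions): Promise<string> {
    const { data } = await this.http.post('/chat/completions', {
      model: options?.model ?? this.config.model,
      messages,
      temperature: options?.temperature,
      max_tokens: options?.maxTokens
    });
    return data.choices[0]?.message?.content ?? '';
  }

  async embed(text: string): Promise<number[]> {
    const { data } = await this.http.post('/embeddings', {
      model: this.config.model,
      input: text
    });
    return data.data[0].embedding;
  }

  async listModels(): Promise<string[]> {
    const { data } = await this.http.get('/models');
    return data.data.map((m: { id: string }) => m.id);
  }
}
```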
### 2.4 Provider Factory
**File**: `src/llm/factory.ts`
**Checklist**:
- [ ] Create factory function
- [ ] Add provider validation
- [ ] Add configuration validation
- [ ] Return appropriate provider instance
- [ ] Add error handling for unknown providers
- [ ] Write unit tests
```typescript
import { LLMProvider, LLMProviderConfig } from './provider';
import { OpenAIProvider } from './openai';
import { LMStudioProvider } from './lmstudio';

export function createLLMProvider(config: LLMProviderConfig): LLMProvider {
  switch (config.provider) {
    case 'openai':
      if (!config.openai) throw new Error('Missing OpenAI configuration');
      return new OpenAIProvider(config.openai);
    case 'lmstudio':
      if (!config.lmstudio) throw new Error('Missing LM Studio configuration');
      return new LMStudioProvider(config.lmstudio);
    default:
      throw new Error(`Unknown LLM provider: ${(config as { provider: string }).provider}`);
  }
}
```
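Hedged usage sketch, assuming the `loadLLMConfig()` helper sketched in Phase 3 and an ESM entry point (for top-level `await`):
```typescript
import { loadLLMConfig } from '../utils/config';
import { createLLMProvider } from './factory';

// Callers only see the LLMProvider interface; the backend is a config detail.
const provider = createLLMProvider(loadLLMConfig());
const answer = await provider.chat([{ role: 'user', content: 'ping' }]);
console.log(provider.name, answer);
```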
### 2.5 Update Existing ACE Components
**Files**: `src/core/generator.ts`, `src/core/reflector.ts`, `src/core/curator.ts`
**Checklist**:
- [ ] Replace hardcoded LLM calls with `LLMProvider` interface
- [ ] Inject provider via constructor (see the sketch after this list)
- [ ] Update all `chat()` calls
- [ ] Update all `embed()` calls
- [ ] Add error handling
- [ ] Update tests with mock provider
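The Generator internals are not written yet, so this is only a sketch of the injection pattern; `generate()` and its prompt shape are hypothetical placeholders:
```typescript
import { LLMProvider, Message } from '../llm/provider';

export class Generator {
  // The provider is injected, so Generator never knows which backend is in use.
  constructor(private readonly llm: LLMProvider) {}

  async generate(systemPrompt: string, userInput: string): Promise<string> {
    const messages: Message[] = [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userInput }
    ];
    return this.llm.chat(messages, { temperature: 0.7 });
  }
}
```
The same pattern applies to Reflector and Curator; tests then pass a mock implementing `LLMProvider` instead of stubbing HTTP.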
---
## Phase 3: Configuration Management ⏳
### 3.1 Update Config Utility
**File**: `src/utils/config.ts`
**Checklist**:
- [ ] Add LLM provider configuration schema (Zod)
- [ ] Add OpenAI config schema
- [ ] Add LM Studio config schema
- [ ] Add validation for required fields based on provider
- [ ] Add default values
- [ ] Add environment variable parsing
- [ ] Add config loading from file
- [ ] Write validation tests
**Schema**:
```typescript
import { z } from 'zod';

const LLMConfigSchema = z.object({
  provider: z.enum(['openai', 'lmstudio']),
  openai: z.object({
    apiKey: z.string(),
    model: z.string().default('gpt-4'),
    embeddingModel: z.string().default('text-embedding-3-small'),
    timeout: z.number().default(30000)
  }).optional(),
  lmstudio: z.object({
    baseUrl: z.string().url(),
    model: z.string(),
    timeout: z.number().default(60000)
  }).optional()
}).refine(
  // Required-fields rule from the checklist: the selected provider's block must exist.
  (cfg) => cfg[cfg.provider] !== undefined,
  { message: 'Config block for the selected provider is required' }
);
```
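A hedged loader sketch pairing the schema with the variables from 3.2, continuing in the same file (`loadLLMConfig` is an assumed name; defaults for missing optional values come from the schema):
```typescript
export type LLMConfig = z.infer<typeof LLMConfigSchema>;

export function loadLLMConfig(): LLMConfig {
  // Unset numeric env vars stay undefined so Zod defaults can apply.
  const num = (v?: string) => (v ? Number(v) : undefined);
  return LLMConfigSchema.parse({
    provider: process.env.LLM_PROVIDER,
    openai: process.env.OPENAI_API_KEY
      ? {
          apiKey: process.env.OPENAI_API_KEY,
          model: process.env.OPENAI_MODEL,
          embeddingModel: process.env.OPENAI_EMBEDDING_MODEL,
          timeout: num(process.env.OPENAI_TIMEOUT)
        }
      : undefined,
    lmstudio: process.env.LMSTUDIO_BASE_URL
      ? {
          baseUrl: process.env.LMSTUDIO_BASE_URL,
          model: process.env.LMSTUDIO_MODEL,
          timeout: num(process.env.LMSTUDIO_TIMEOUT)
        }
      : undefined
  });
}
```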
### 3.2 Update Environment Variables
**File**: `.env.example`
**Checklist**:
- [ ] Add `LLM_PROVIDER=openai|lmstudio`
- [ ] Add OpenAI section with variables
- [ ] Add LM Studio section with variables
- [ ] Add comments explaining each variable
- [ ] Add example values
- [ ] Document required vs optional variables
**Variables to Add**:
```bash
# LLM Provider Configuration
LLM_PROVIDER=openai
# OpenAI Configuration (if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
OPENAI_TIMEOUT=30000
# LM Studio Configuration (if LLM_PROVIDER=lmstudio)
LMSTUDIO_BASE_URL=http://10.242.247.136:11888/v1
LMSTUDIO_MODEL=local-model
LMSTUDIO_TIMEOUT=60000
```
---
## Phase 4: Docker Configuration ⏳
### 4.1 MCP Server Dockerfile
**File**: `Dockerfile`
**Checklist**:
- [ ] Create multi-stage build
- [ ] Stage 1: Build TypeScript
- [ ] Stage 2: Runtime with minimal dependencies
- [ ] Copy built files
- [ ] Set working directory
- [ ] Expose necessary ports (if any)
- [ ] Set environment variables
- [ ] Define ENTRYPOINT
- [ ] Test build locally
**Template**:
```dockerfile
FROM node:lts AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:lts-slim
WORKDIR /app
ENV NODE_ENV=production
# Install only production dependencies in the runtime stage
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
ENTRYPOINT ["node", "dist/index.js"]
```
### 4.2 Dashboard Dockerfile
**File**: `dashboard/Dockerfile`
**Checklist**:
- [ ] Use nginx base image
- [ ] Copy HTML/CSS/JS files
- [ ] Configure nginx for SPA
- [ ] Expose port 80
- [ ] Test build locally
**Template**:
```dockerfile
FROM nginx:alpine
# For SPA-style routing, also COPY a custom nginx.conf with a try_files fallback
COPY index.html /usr/share/nginx/html/
COPY app.js /usr/share/nginx/html/
COPY style.css /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
### 4.3 Docker Compose - Development
**File**: `docker-compose.dev.yml`
**Checklist**:
- [ ] Define ace-server service
- [ ] Define ace-dashboard service
- [ ] Configure volumes for hot-reload
- [ ] Configure environment variables
- [ ] Configure networks
- [ ] Add health checks
- [ ] Test locally
**Services**:
```yaml
services:
  ace-server:
    build: .
    volumes:
      - ./src:/app/src
      - ./contexts:/app/contexts
      - ./logs:/app/logs
    environment:
      - NODE_ENV=development
      - LLM_PROVIDER=${LLM_PROVIDER}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LMSTUDIO_BASE_URL=${LMSTUDIO_BASE_URL}
    networks:
      - ace-network

  ace-dashboard:
    build: ./dashboard
    ports:
      - "3000:80"
    depends_on:
      - ace-server
    networks:
      - ace-network

volumes:
  contexts:
  logs:

networks:
  ace-network:
```
### 4.4 Docker Compose - Production
**File**: `docker-compose.yml`
**Checklist**:
- [ ] Define ace-server service
- [ ] Define ace-dashboard service
- [ ] Use production images
- [ ] Configure persistent volumes
- [ ] Configure restart policies
- [ ] Add resource limits
- [ ] Add health checks
- [ ] Test deployment
**Differences from dev** (sketched below):
- No volume mounting for source code
- Restart policy: `always`
- Resource limits specified
- Production environment variables
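A hedged sketch of how those differences might look in `docker-compose.yml` (service names match the dev file; the limit values are placeholders, not benchmarks):
```yaml
services:
  ace-server:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
      - LLM_PROVIDER=${LLM_PROVIDER}
    # Named volumes instead of source bind mounts for persistence
    volumes:
      - contexts:/app/contexts
      - logs:/app/logs
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M

volumes:
  contexts:
  logs:
```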
### 4.5 Docker Ignore File
**File**: `.dockerignore`
**Checklist** (template sketch below):
- [ ] Add node_modules
- [ ] Add .git
- [ ] Add logs
- [ ] Add tests
- [ ] Add documentation
- [ ] Add development files
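A minimal template matching this checklist (entries beyond the checklist, like `.env`, are suggested additions; adjust to taste):
```
node_modules
.git
logs
tests
docs
*.md
.env
docker-compose*.yml
```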
---
## Phase 5: Testing & Validation ⏳
### 5.1 Unit Tests for LLM Providers
**Files**: `src/llm/__tests__/`
**Checklist**:
- [ ] Test OpenAI provider chat()
- [ ] Test OpenAI provider embed()
- [ ] Test LM Studio provider chat()
- [ ] Test LM Studio provider embed()
- [ ] Test provider factory
- [ ] Test error handling
- [ ] Test timeout behavior
- [ ] Test retry logic
- [ ] Mock HTTP requests (see the `nock` sketch below)
- [ ] Achieve >80% coverage
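One way to mock at the HTTP level is `nock` with Jest; both are assumed tooling choices not yet in `package.json`. A hedged sketch for the LM Studio provider:
```typescript
import nock from 'nock';
import { LMStudioProvider } from '../lmstudio';

describe('LMStudioProvider', () => {
  it('parses chat completions from the /v1 endpoint', async () => {
    // Intercept the outgoing request instead of hitting a real server.
    nock('http://localhost:1234')
      .post('/v1/chat/completions')
      .reply(200, { choices: [{ message: { content: 'hi' } }] });

    const provider = new LMStudioProvider({
      baseUrl: 'http://localhost:1234/v1',
      model: 'local-model',
      timeout: 5000
    });
    const reply = await provider.chat([{ role: 'user', content: 'ping' }]);
    expect(reply).toBe('hi');
  });
});
```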
### 5.2 Integration Tests
**Files**: `tests/integration/`
**Checklist**:
- [ ] Test full ACE workflow with OpenAI
- [ ] Test full ACE workflow with LM Studio
- [ ] Test provider switching
- [ ] Test configuration loading
- [ ] Test Docker container startup
- [ ] Test dashboard accessibility
### 5.3 Docker Testing
**Checklist**:
- [ ] Test `docker build` for server
- [ ] Test `docker build` for dashboard
- [ ] Test `docker-compose up` locally
- [ ] Test volume persistence
- [ ] Test network connectivity
- [ ] Test environment variable passing
- [ ] Test logs accessibility
---
## Phase 6: Documentation ⏳
### 6.1 LM Studio Setup Guide
**File**: `docs/LM_STUDIO_SETUP.md`
**Checklist**:
- [ ] Prerequisites
- [ ] LM Studio installation
- [ ] Model selection and download
- [ ] Server configuration
- [ ] Starting the server
- [ ] Verifying endpoints
- [ ] Configuring ACE MCP Server
- [ ] Troubleshooting
### 6.2 Docker Deployment Guide
**File**: `docs/DOCKER_DEPLOYMENT.md`
**Checklist**:
- [ ] Local deployment instructions
- [ ] Ubuntu VM deployment instructions
- [ ] Environment configuration
- [ ] Volume management
- [ ] Network configuration
- [ ] Monitoring logs
- [ ] Backup strategies
- [ ] Troubleshooting
### 6.3 Configuration Guide
**File**: `docs/CONFIGURATION.md`
**Checklist**:
- [ ] All environment variables explained
- [ ] Provider selection guide
- [ ] OpenAI configuration
- [ ] LM Studio configuration
- [ ] Advanced settings
- [ ] Security considerations
### 6.4 Update Main README
**File**: `README.md`
**Checklist**:
- [ ] Add Docker quick start
- [ ] Add LLM provider options
- [ ] Add deployment badge
- [ ] Update installation section
- [ ] Add links to new guides
- [ ] Update feature list
---
## Phase 7: Deployment Testing ⏳
### 7.1 Local Docker Testing
**Checklist**:
- [ ] Test with OpenAI provider
- [ ] Test with LM Studio provider
- [ ] Test dashboard access
- [ ] Test MCP server connectivity
- [ ] Test playbook persistence
- [ ] Test logs generation
- [ ] Verify performance
### 7.2 Ubuntu VM Testing
**Checklist**:
- [ ] Set up clean Ubuntu VM
- [ ] Install Docker
- [ ] Install Docker Compose
- [ ] Clone repository
- [ ] Configure environment
- [ ] Run docker-compose up
- [ ] Test from remote client
- [ ] Verify dashboard access
- [ ] Test playbook persistence
---
## Completion Criteria
### Must Have (P0)
- [ ] LLM provider abstraction implemented
- [ ] OpenAI provider working
- [ ] LM Studio provider working
- [ ] Docker configurations complete
- [ ] Local Docker deployment working
- [ ] Basic documentation complete
### Should Have (P1)
- [ ] Ubuntu VM deployment tested
- [ ] Comprehensive tests (>80% coverage)
- [ ] Complete documentation
- [ ] Performance benchmarks
### Nice to Have (P2)
- [ ] Monitoring/metrics
- [ ] Automated backups
- [ ] CI/CD pipeline
- [ ] Load testing
---
## Dependencies
### External
- Docker Desktop (local)
- Docker + Docker Compose (Ubuntu VM)
- OpenAI API key (for OpenAI provider)
- LM Studio server (for lmstudio provider)
### Internal
- TypeScript source files (to be implemented)
- Memory Bank (completed)
- Package dependencies (to be installed)
---
## Timeline
- **Phase 1**: 1 hour ✅
- **Phase 2**: 3 hours
- **Phase 3**: 1 hour
- **Phase 4**: 2 hours
- **Phase 5**: 3 hours
- **Phase 6**: 2 hours
- **Phase 7**: 2 hours
- **Total**: ~14 hours
---
## Notes
- Work is being done in parallel where possible
- Testing happens incrementally, not just in Phase 5
- Documentation is updated as features are implemented
- User prefers Russian communication but English code
- User has LM Studio at http://10.242.247.136:11888
- Both local and remote deployment are equally important
---
## Last Updated
2025-10-28 - Initial task breakdown