# Style Guide: ACE MCP Server

## Language & Communication

### Code Language

- **Primary Language**: English
- **Code**: English (variables, functions, classes, types)
- **Comments**: English
- **Documentation**: English
- **Commit Messages**: English

### User Communication

- **Communication with User**: Russian
- **Error Messages for Developers**: English
- **Log Messages**: English
- **User-facing UI**: English (dashboard)

---

## TypeScript Style

### General Principles

1. **Type Safety**: Use strict TypeScript, no `any` unless absolutely necessary
2. **Explicitness**: Prefer explicit types over inferred types where it improves clarity
3. **Immutability**: Use `const` by default, `let` only when mutation is needed
4. **Functional Style**: Prefer pure functions and immutable data transformations

### Naming Conventions

#### Files

```typescript
// PascalCase for classes
Generator.ts
OpenAIProvider.ts
PlaybookManager.ts

// camelCase for utilities and functions
config.ts
logger.ts
createProvider.ts

// kebab-case for multi-word utilities
semantic-deduplicator.ts
bullet-storage.ts

// Suffix for type files
types.ts
interfaces.ts
```

#### Variables and Functions

```typescript
// camelCase for variables and functions
const contextId = 'backend';
const bulletCount = 10;

function generateTrajectory(query: string): Trajectory { }
async function loadPlaybook(contextId: string): Promise<Playbook> { }

// SCREAMING_SNAKE_CASE for constants
const MAX_RETRIES = 3;
const DEFAULT_THRESHOLD = 0.85;
const API_TIMEOUT_MS = 30000;
```

#### Interfaces and Types

```typescript
// PascalCase with descriptive names
interface LLMProvider { }
interface PlaybookMetadata { }
type DeltaOperation = 'ADD' | 'UPDATE' | 'DELETE';

// Prefix interfaces with I only if there's ambiguity
interface IProvider { }  // ❌ Avoid
interface Provider { }   // ✅ Preferred

// Suffix types with Type only if needed for clarity
type ConfigType = { };   // When Config is a class
```

#### Classes

```typescript
// PascalCase
class OpenAIProvider { }
class PlaybookManager { }
class SemanticDeduplicator { }

// Private fields: an underscore prefix is optional; prefer the 'private' keyword
class Example {
  private _cache: Map<string, any>; // Optional style
  private cache: Map<string, any>;  // Preferred with 'private'
}
```

#### Enums

```typescript
// PascalCase for enum name, SCREAMING_SNAKE_CASE for values
enum OperationType {
  ADD = 'ADD',
  UPDATE = 'UPDATE',
  DELETE = 'DELETE'
}

// Or use string unions (preferred)
type OperationType = 'ADD' | 'UPDATE' | 'DELETE';
```

### Code Structure

#### File Organization

```typescript
// 1. Imports (external first, then internal)
import { z } from 'zod';
import { OpenAI } from 'openai';

import { LLMProvider, Message } from './provider';
import { logger } from '../utils/logger';

// 2. Types and Interfaces
export interface Config { }
export type Status = 'active' | 'inactive';

// 3. Constants
const DEFAULT_MODEL = 'gpt-4';
const MAX_TOKENS = 4096;

// 4. Main implementation
export class OpenAIProvider implements LLMProvider {
  // Public properties first
  public readonly name = 'openai';

  // Private properties
  private client: OpenAI;
  private config: OpenAIConfig;

  // Constructor
  constructor(config: OpenAIConfig) {
    this.config = config;
    this.client = new OpenAI({ apiKey: config.apiKey });
  }

  // Public methods
  async chat(messages: Message[]): Promise<string> { }

  // Private methods
  private formatMessages(messages: Message[]): any[] { }
}

// 5. Helper functions (if any)
function validateConfig(config: unknown): Config { }
```
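The skeleton above leaves `validateConfig` empty. One way it might be implemented is as a thin zod wrapper that also derives the `Config` type from the schema rather than declaring it separately. This is a sketch only; the schema fields are assumptions for illustration, not taken from the real project:

```typescript
import { z } from 'zod';

// Assumed fields, for illustration only; the real Config may differ
const ConfigSchema = z.object({
  apiKey: z.string().min(1),
  model: z.string().default('gpt-4'),
  maxTokens: z.number().int().positive().default(4096)
});

export type Config = z.infer<typeof ConfigSchema>;

// Throws a ZodError with field-level details when the input is malformed
function validateConfig(config: unknown): Config {
  return ConfigSchema.parse(config);
}
```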
#### Function Structure

```typescript
// Clear, single-responsibility functions
async function generateTrajectory(
  query: string,
  contextId: string,
  provider: LLMProvider
): Promise<Trajectory> {
  // 1. Input validation
  if (!query.trim()) {
    throw new Error('Query cannot be empty');
  }

  // 2. Preparation
  const playbook = await loadPlaybook(contextId);
  const bullets = selectRelevantBullets(playbook, query);

  // 3. Main logic
  const messages = buildMessages(query, bullets);
  const response = await provider.chat(messages);

  // 4. Post-processing
  const trajectory = parseResponse(response);

  // 5. Logging and return
  logger.info('Trajectory generated', { contextId, queryLength: query.length });
  return trajectory;
}
```

### Type Definitions

#### Prefer Interfaces for Objects

```typescript
// ✅ Use interface for object shapes
interface Bullet {
  id: string;
  content: string;
  helpful_count: number;
  created_at: Date;
}

// ✅ Use type for unions, intersections, utilities
type DeltaOperation = AddOperation | UpdateOperation | DeleteOperation;
type PartialBullet = Partial<Bullet>;
```

#### Use Discriminated Unions

```typescript
// ✅ Discriminated unions for variants
type DeltaOperation =
  | { operation: 'ADD'; bullet: Omit<Bullet, 'id'> }
  | { operation: 'UPDATE'; bullet_id: string; updates: Partial<Bullet> }
  | { operation: 'DELETE'; bullet_id: string };

// Usage with type narrowing
function applyDelta(delta: DeltaOperation): void {
  switch (delta.operation) {
    case 'ADD':
      // TypeScript knows delta.bullet exists here
      break;
    case 'UPDATE':
      // TypeScript knows delta.bullet_id and delta.updates exist here
      break;
    case 'DELETE':
      // TypeScript knows only delta.bullet_id exists here
      break;
  }
}
```

#### Generic Types

```typescript
// Use descriptive generic names
interface Repository<T> {
  get(id: string): Promise<T>;
  save(item: T): Promise<void>;
}

// Use T, U, V for simple generics
function map<T, U>(arr: T[], fn: (item: T) => U): U[] { }

// Use descriptive names for complex generics
function createProvider<TConfig extends ProviderConfig>(
  config: TConfig
): Provider<TConfig> { }
```

### Error Handling

#### Use Custom Error Classes

```typescript
// Define error hierarchy
export class ACEError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'ACEError';
  }
}

export class LLMProviderError extends ACEError {
  constructor(
    message: string,
    public readonly provider: string,
    public readonly originalError?: Error
  ) {
    super(message);
    this.name = 'LLMProviderError';
  }
}

export class ConfigurationError extends ACEError {
  constructor(message: string, public readonly field: string) {
    super(message);
    this.name = 'ConfigurationError';
  }
}
```

#### Error Handling Patterns

```typescript
// ✅ Try-catch with specific error handling
async function callLLM(provider: LLMProvider, messages: Message[]): Promise<string> {
  try {
    return await provider.chat(messages);
  } catch (error) {
    if (error instanceof LLMProviderError) {
      logger.error('LLM provider failed', { error });
      throw error; // Re-throw domain errors
    }

    // Wrap unknown errors
    throw new LLMProviderError(
      'Unexpected error calling LLM',
      provider.name,
      error as Error
    );
  }
}

// ✅ Use a Result type for expected failures
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

async function tryLoadPlaybook(id: string): Promise<Result<Playbook>> {
  try {
    const playbook = await loadPlaybook(id);
    return { ok: true, value: playbook };
  } catch (error) {
    return { ok: false, error: error as Error };
  }
}
```
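A caller of a `Result`-returning function then branches on `ok` instead of wrapping the call in try-catch. A short usage sketch, reusing `tryLoadPlaybook` and `logger` from the examples above; the fallback message is illustrative, not prescribed:

```typescript
async function describePlaybook(id: string): Promise<string> {
  const result = await tryLoadPlaybook(id);

  if (!result.ok) {
    // Expected failure: nothing is thrown, the caller decides how to react
    logger.error('Playbook load failed', { id, error: result.error.message });
    return `Playbook "${id}" could not be loaded`;
  }

  // In this branch TypeScript has narrowed result to { ok: true; value: Playbook }
  return `Playbook "${id}" is available`;
}
```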
### Async/Await

```typescript
// ✅ Always use async/await, not .then()
async function loadAndProcess(): Promise<void> {
  const playbook = await loadPlaybook('backend');
  const processed = await processPlaybook(playbook);
  await savePlaybook(processed);
}

// ✅ Use Promise.all for parallel operations
async function loadMultiplePlaybooks(ids: string[]): Promise<Playbook[]> {
  return Promise.all(ids.map(id => loadPlaybook(id)));
}

// ✅ Use try-catch for error handling
async function safeOperation(): Promise<void> {
  try {
    await riskyOperation();
  } catch (error) {
    logger.error('Operation failed', { error });
    throw error;
  }
}
```

### Comments and Documentation

#### JSDoc for Public APIs

````typescript
/**
 * Generates a trajectory using the playbook and LLM provider.
 *
 * @param query - The user's query or task description
 * @param contextId - The context identifier for playbook selection
 * @param provider - The LLM provider to use for generation
 * @returns A trajectory containing execution steps
 * @throws {LLMProviderError} If the LLM provider fails
 * @throws {ConfigurationError} If the context is not found
 *
 * @example
 * ```typescript
 * const trajectory = await generator.generate(
 *   'Create a login endpoint',
 *   'backend',
 *   openaiProvider
 * );
 * ```
 */
export async function generate(
  query: string,
  contextId: string,
  provider: LLMProvider
): Promise<Trajectory> { }
````

#### Inline Comments

```typescript
// ✅ Explain "why", not "what"
// Use cosine similarity instead of exact match to handle paraphrasing
const similarity = cosineSimilarity(embedding1, embedding2);

// ❌ Don't state the obvious
// Increment the counter
counter++;

// ✅ Explain complex logic
// Merge bullets with similarity > threshold to avoid redundancy.
// Keep the bullet with higher helpful_count and update metadata.
if (similarity > threshold) {
  const kept = a.helpful_count > b.helpful_count ? a : b;
  const discarded = kept === a ? b : a;
  kept.helpful_count += discarded.helpful_count;
}
```

### Testing Style

#### Test File Naming

```typescript
// Place tests next to the implementation or in a __tests__ folder
src/llm/openai.ts
src/llm/__tests__/openai.test.ts

// Or mirror the structure under tests/
src/llm/openai.ts
tests/llm/openai.test.ts
```

#### Test Structure

```typescript
import { describe, it, expect, beforeEach, jest } from '@jest/globals';
import { OpenAIProvider } from '../openai';

describe('OpenAIProvider', () => {
  let provider: OpenAIProvider;

  beforeEach(() => {
    provider = new OpenAIProvider({
      apiKey: 'test-key',
      model: 'gpt-4'
    });
  });

  describe('chat()', () => {
    it('should return a response for valid messages', async () => {
      const messages = [{ role: 'user', content: 'Hello' }];
      const response = await provider.chat(messages);

      expect(response).toBeTruthy();
      expect(typeof response).toBe('string');
    });

    it('should throw LLMProviderError on API failure', async () => {
      // Simulate a failure from the underlying OpenAI client
      jest
        .spyOn((provider as any).client.chat.completions, 'create')
        .mockRejectedValue(new Error('API Error'));

      const messages = [{ role: 'user', content: 'Hello' }];
      await expect(provider.chat(messages)).rejects.toThrow(LLMProviderError);
    });
  });
});
```
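Either layout is compatible with a standard Jest setup. A minimal `jest.config.ts` sketch, assuming `ts-jest` handles TypeScript transformation; the glob patterns cover both conventions shown above, and the coverage numbers mirror the targets in the quality standards below:

```typescript
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  // Pick up tests that sit next to the implementation or under a top-level tests/ folder
  testMatch: ['**/__tests__/**/*.test.ts', '**/tests/**/*.test.ts'],
  // Mirrors the >80% overall coverage target
  coverageThreshold: {
    global: { lines: 80, branches: 80, functions: 80, statements: 80 }
  }
};

export default config;
```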
---

## Git Commit Style

### Commit Message Format

```
<type>(<scope>): <subject>

<body>

<footer>
```

### Types

- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `style`: Code style changes (formatting, no logic change)
- `refactor`: Code refactoring
- `test`: Adding or updating tests
- `chore`: Maintenance tasks

### Examples

```bash
feat(llm): add LM Studio provider implementation

Implement LMStudioProvider class with chat and embed methods.
Use axios for HTTP requests to the local LM Studio server.

Closes #42

---

fix(deduplicator): handle empty playbook edge case

Deduplicator was failing when playbook had no bullets.
Add early return for empty playbook.

---

docs(readme): update Docker deployment instructions

Add section for Ubuntu VM deployment with Docker Compose.
Include environment variable configuration examples.
```

---

## Code Quality Standards

### Linting

- Use ESLint with the TypeScript plugin
- Run `npm run lint` before commits
- Fix all errors, minimize warnings

### Formatting

- Use Prettier for consistent formatting
- Line length: 100 characters
- Indentation: 2 spaces
- Semicolons: Yes
- Quotes: Single quotes for strings
- Trailing commas: Yes

### Type Coverage

- Aim for 100% type coverage
- Use `strict` mode in tsconfig.json
- No `any` types (use `unknown` if needed)
- Enable `noImplicitAny`, `strictNullChecks`, `strictFunctionTypes`

### Test Coverage

- Target: >80% overall coverage
- Critical paths: >95% coverage
- Test edge cases and error conditions
- Mock external dependencies

---

## Performance Guidelines

### Avoid Premature Optimization

```typescript
// ✅ Clear, maintainable code first
function findBullet(bullets: Bullet[], id: string): Bullet | undefined {
  return bullets.find(b => b.id === id);
}

// ❌ Don't optimize without profiling
function findBullet(bullets: Bullet[], id: string): Bullet | undefined {
  // Binary search for "performance" (but bullets aren't sorted by ID)
  let left = 0, right = bullets.length - 1;
  // ...complex binary search that doesn't help...
}
```

### Profile Before Optimizing

1. Write clear, correct code
2. Profile to find bottlenecks
3. Optimize hot paths only
4. Measure the improvement

### Known Performance Considerations

- Deduplication is O(n²) - acceptable for <10K bullets (see the sketch below)
- File I/O is cached in memory
- LLM calls are the bottleneck, not our code
- TF-IDF computation can be parallelized if needed
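The pairwise comparison behind that O(n²) figure is a plain cosine similarity over TF-IDF embeddings. A minimal sketch of the math, assuming a dense `number[]` vector representation for illustration:

```typescript
// Cosine similarity between two dense vectors of equal length.
// Returns a value in [-1, 1]; TF-IDF vectors are non-negative, so in practice [0, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error('Vectors must have the same dimensionality');
  }

  let dot = 0;
  let normA = 0;
  let normB = 0;

  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }

  // Guard against zero vectors (e.g. a bullet with no indexed terms)
  if (normA === 0 || normB === 0) {
    return 0;
  }

  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```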
---

## Security Guidelines

### Input Validation

```typescript
// ✅ Always validate external input
import { z } from 'zod';

const QuerySchema = z.object({
  query: z.string().min(1).max(10000),
  contextId: z.string().regex(/^[a-z0-9-]+$/)
});

function handleQuery(input: unknown) {
  const validated = QuerySchema.parse(input);
  // Now safe to use validated.query and validated.contextId
}
```

### Secrets Management

```typescript
// ✅ Never hardcode secrets
const apiKey = process.env.OPENAI_API_KEY;

// ❌ Don't commit secrets
const apiKey = 'sk-abc123...'; // Never do this

// ✅ Validate secrets exist
if (!process.env.OPENAI_API_KEY) {
  throw new ConfigurationError('OPENAI_API_KEY is required', 'OPENAI_API_KEY');
}
```

### Logging Sensitive Data

```typescript
// ✅ Don't log secrets or PII
logger.info('API call made', {
  provider: 'openai',
  model: 'gpt-4'
  // Don't log apiKey or user data
});

// ✅ Redact in logs if needed
logger.debug('Config loaded', {
  ...config,
  apiKey: config.apiKey ? '[REDACTED]' : undefined
});
```

---

## Documentation Standards

### README Structure

1. Project title and description
2. Quick start / Installation
3. Usage examples
4. Configuration
5. API reference
6. Contributing guidelines
7. License

### Code Documentation

- Public APIs: Full JSDoc with examples
- Private functions: Brief comment explaining purpose
- Complex algorithms: Detailed comments
- Type definitions: JSDoc for non-obvious types

### Inline Documentation

```typescript
// ✅ Good inline documentation

/**
 * Semantic deduplication using cosine similarity.
 *
 * This compares TF-IDF embeddings of bullets to find near-duplicates.
 * Bullets above the similarity threshold are merged by:
 * 1. Keeping the bullet with higher helpful_count
 * 2. Summing helpful_count values
 * 3. Preserving the better-written content
 */
export class SemanticDeduplicator { }
```

---

## Best Practices Summary

1. **Type Safety**: Use strict TypeScript, avoid `any`
2. **Error Handling**: Use custom errors, log appropriately
3. **Async/Await**: Always use async/await, handle errors
4. **Testing**: Write tests, aim for >80% coverage
5. **Documentation**: JSDoc for public APIs, explain "why" in comments
6. **Security**: Validate input, never hardcode secrets
7. **Performance**: Profile before optimizing
8. **Git**: Use conventional commits, meaningful messages
9. **Code Review**: All code reviewed before merge
10. **Simplicity**: Prefer clear code over clever code

---

**Last Updated**: 2025-10-28
**Version**: 1.0
