# Role-Context MCP: Technical Architecture
## Overview
The Role-Context MCP is designed with a modular architecture that separates concerns and allows for extensibility. This document provides a detailed explanation of the system's architecture: its components and how they interact.
## System Architecture
### High-Level Architecture
```
┌───────────────────────────────────────────────────────────────┐
│                      Client Applications                      │
└──────────────────────────────┬────────────────────────────────┘
                               │
                               ▼
┌───────────────────────────────────────────────────────────────┐
│                        HTTP/MCP Server                        │
│                                                               │
│  ┌──────────────┐    ┌──────────────┐    ┌───────────────┐    │
│  │ API Handlers │◄──►│ MCP Handlers │◄──►│ Tool Handlers │    │
│  └───────┬──────┘    └───────┬──────┘    └───────┬───────┘    │
│          │                   │                   │            │
│          ▼                   ▼                   ▼            │
│  ┌──────────────┐   ┌─────────────────┐   ┌────────────────┐  │
│  │ Role Manager │◄─►│ Context Manager │◄─►│ Memory Manager │  │
│  └───────┬──────┘   └────────┬────────┘   └──────┬─────────┘  │
│          │                   │                   │            │
│          └───────────────────┼───────────────────┘            │
│                              │                                │
│                              ▼                                │
│                     ┌─────────────────┐                       │
│                     │  OpenAI Client  │                       │
│                     └─────────────────┘                       │
└───────────────────────────────┬───────────────────────────────┘
                                │
                                ▼
┌───────────────────────────────────────────────────────────────┐
│                          OpenAI API                           │
└───────────────────────────────────────────────────────────────┘
```
### Component Breakdown
1. **Server Layer**
- **HTTP Server**: Express-based REST API server
- **MCP Server**: Model Context Protocol server implementation
2. **Handler Layer**
- **API Handlers**: Process HTTP requests and responses
- **MCP Handlers**: Process MCP requests and responses
- **Tool Handlers**: Implement the functionality of the MCP tools
3. **Manager Layer**
- **Role Manager**: Manages role definitions and processing
- **Context Manager**: Manages context switching and triggers
- **Memory Manager**: Manages memory storage and retrieval
4. **Client Layer**
- **OpenAI Client**: Communicates with the OpenAI API
## Component Details
### Role Manager
The Role Manager is responsible for managing role definitions and processing queries using those roles.
#### Key Responsibilities
- Storing and retrieving role definitions
- Creating, updating, and deleting roles
- Generating complete system prompts for roles
- Processing queries using roles
- Managing role tones
#### Class Structure
```typescript
class RoleManager {
  private roles: Map<string, Role>;
  private aiClient: AIClient;

  constructor(defaultRoles: Role[], aiClient: AIClient) {...}

  public getRoles(): Role[] {...}
  public getRole(roleId: string): Role {...}
  public createRole(role: Role): Role {...}
  public updateRole(roleId: string, updates: Partial<Role>): Role {...}
  public deleteRole(roleId: string): boolean {...}
  public changeRoleTone(roleId: string, tone: string): Role {...}
  public generateCompletePrompt(roleId: string, customInstructions?: string): string {...}
  public processQuery(roleId: string, query: string, customInstructions?: string): Promise<string> {...}
}
```
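To make the API concrete, here is a hypothetical sketch of a role definition and a query call. The `Role` field names shown are assumptions for illustration and may not match the actual type definitions:

```typescript
// Hypothetical Role shape; field names are illustrative assumptions.
interface Role {
  id: string;
  name: string;
  description: string;
  systemPrompt: string;
  tone: string;
}

const supportRole: Role = {
  id: 'support-agent',
  name: 'Support Agent',
  description: 'Answers product support questions',
  systemPrompt: 'You are a helpful support agent.',
  tone: 'friendly',
};

// aiClient is assumed to be an already-constructed AIClient instance.
const roleManager = new RoleManager([supportRole], aiClient);
const answer = await roleManager.processQuery(
  'support-agent',
  'How do I reset my password?'
);
```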
### Context Manager
The Context Manager is responsible for managing context switching and triggers.
#### Key Responsibilities
- Maintaining context stacks for agents
- Switching contexts based on triggers or explicit requests
- Storing context history
- Managing context triggers
#### Class Structure
```typescript
class ContextManager {
  private contextStacks: Map<string, ContextStack>;
  private contextTriggers: ContextTrigger[];
  private memoryManager: MemoryManager;

  constructor(triggers: ContextTrigger[], memoryManager: MemoryManager) {...}

  public switchContext(request: ContextSwitchRequest): Promise<ContextSwitchResponse> {...}
  public getCurrentContext(agentId: string): ContextState {...}
  public getContextHistory(agentId: string): ContextState[] {...}
  public addContextTrigger(trigger: ContextTrigger): ContextTrigger {...}
  public updateContextTrigger(triggerId: string, updates: Partial<ContextTrigger>): ContextTrigger {...}
  public deleteContextTrigger(triggerId: string): boolean {...}
  public checkInputForTriggers(agentId: string, input: string): Promise<ContextSwitchResponse[]> {...}
  public handleMultiModalContext(request: MultiModalContextRequest): Promise<ContextSwitchResponse> {...}
}
```
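As a rough illustration, a trigger might pair a regex pattern with the context it should activate. The field names below are assumptions, not the actual `ContextTrigger` definition:

```typescript
// Illustrative trigger; the real ContextTrigger fields may differ.
const billingTrigger: ContextTrigger = {
  id: 'billing-keywords',
  pattern: '\\b(invoice|refund|charge)\\b', // matched against user input
  contextType: 'topic',
  contextValue: 'billing',
};

contextManager.addContextTrigger(billingTrigger);

// Any matching input causes a context switch for the agent.
const switches = await contextManager.checkInputForTriggers(
  'agent-1',
  'I was double charged on my last invoice'
);
```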
### Memory Manager
The Memory Manager is responsible for storing and retrieving memories.
#### Key Responsibilities
- Storing memories with vector embeddings
- Retrieving memories based on relevance to queries
- Managing memory types and importance levels
- Implementing time-to-live (TTL) for memories
#### Class Structure
```typescript
class MemoryManager {
  private provider: MemoryProvider;

  constructor(provider: MemoryProvider) {...}

  public storeMemory(params: MemoryStorageParams): Promise<VectorMemory> {...}
  public getMemoriesByRoleId(roleId: string, type?: MemoryType): Promise<VectorMemory[]> {...}
  public getRelevantMemories(roleId: string, query: string, limit?: number): Promise<VectorMemory[]> {...}
  public clearMemoriesByRoleId(roleId: string, type?: MemoryType): Promise<boolean> {...}
}
```
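A minimal usage sketch, assuming plausible parameter names for `MemoryStorageParams` (the actual fields may differ):

```typescript
// Store a memory; field names here are assumptions for illustration.
await memoryManager.storeMemory({
  roleId: 'support-agent',
  content: 'User prefers email follow-ups',
  type: 'user',           // assumed MemoryType value
  importance: 'high',
  ttl: 60 * 60 * 24 * 30, // 30 days, in seconds
});

// Retrieve the five memories most relevant to a query via vector search.
const memories = await memoryManager.getRelevantMemories(
  'support-agent',
  'How should I follow up with this user?',
  5
);
```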
### OpenAI Client
The OpenAI Client is responsible for communicating with the OpenAI API.
#### Key Responsibilities
- Generating responses using the OpenAI API
- Creating embeddings for vector search
- Managing API keys and rate limiting
#### Class Structure
```typescript
class OpenAIClient implements AIClient {
  private openai: OpenAI;
  private model: string;

  constructor(apiKey: string, model?: string) {...}

  public generateResponse(systemPrompt: string, userPrompt: string): Promise<string> {...}
  public createEmbedding(text: string): Promise<number[]> {...}
}
```
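For orientation, here is a minimal sketch of how these two methods could be implemented with the official `openai` Node SDK. This is not the project's actual implementation; the embedding model choice in particular is an assumption:

```typescript
import OpenAI from 'openai';

class OpenAIClient implements AIClient {
  private openai: OpenAI;
  private model: string;

  constructor(apiKey: string, model = 'gpt-4o-mini') {
    this.openai = new OpenAI({ apiKey });
    this.model = model;
  }

  // Send a system + user message pair and return the assistant's reply.
  async generateResponse(systemPrompt: string, userPrompt: string): Promise<string> {
    const completion = await this.openai.chat.completions.create({
      model: this.model,
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt },
      ],
    });
    return completion.choices[0]?.message?.content ?? '';
  }

  // Produce an embedding vector for semantic (vector) memory search.
  async createEmbedding(text: string): Promise<number[]> {
    const result = await this.openai.embeddings.create({
      model: 'text-embedding-3-small', // assumed embedding model
      input: text,
    });
    return result.data[0].embedding;
  }
}
```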
## Data Flow
### Processing a Query
1. Client sends a query to the server
2. Server routes the request to the appropriate handler
3. Handler calls the Role Manager to process the query
4. Role Manager generates a complete system prompt
5. Role Manager calls the OpenAI Client to generate a response
6. OpenAI Client calls the OpenAI API
7. Response is returned to the client
```sequence
Client->Server: POST /process {roleId, query}
Server->RoleManager: processQuery(roleId, query)
RoleManager->ContextManager: getCurrentContext(roleId)
ContextManager-->RoleManager: contextState
RoleManager->MemoryManager: getRelevantMemories(roleId, query)
MemoryManager-->RoleManager: relevantMemories
RoleManager->RoleManager: generateCompletePrompt(roleId, contextState, relevantMemories)
RoleManager->OpenAIClient: generateResponse(systemPrompt, query)
OpenAIClient->OpenAI API: createChatCompletion()
OpenAI API-->OpenAIClient: response
OpenAIClient-->RoleManager: response
RoleManager-->Server: response
Server-->Client: {response}
```
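At the handler level, the same flow might be wired up roughly like this (a sketch only; the route path, payload shape, and error handling are assumptions):

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// roleManager is assumed to be constructed during server startup.
app.post('/process', async (req, res) => {
  const { roleId, query } = req.body;
  try {
    // Steps 3-6 above happen inside processQuery: context lookup,
    // memory retrieval, prompt assembly, and the OpenAI call.
    const response = await roleManager.processQuery(roleId, query);
    res.json({ response });
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
});
```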
### Context Switching
1. Client sends input to the server
2. Server routes the request to the appropriate handler
3. Handler calls the Context Manager to check for triggers
4. Context Manager checks if the input matches any triggers
5. If a trigger is matched, the Context Manager switches the context
6. Context Manager stores the context change in memory
7. New context is returned to the client
```sequence
Client->Server: POST /process {roleId, query}
Server->ContextManager: checkInputForTriggers(roleId, query)
ContextManager->ContextManager: matchTriggers(query)
Note right of ContextManager: If trigger matched
ContextManager->ContextManager: switchContext(roleId, contextType, contextValue)
ContextManager->MemoryManager: storeMemory(contextChange)
MemoryManager-->ContextManager: storedMemory
ContextManager-->Server: contextSwitchResponse
Server-->Client: {contextSwitch}
```
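The `matchTriggers` step could be as simple as a regex scan over the registered triggers. The following is an assumed mechanism, not the confirmed implementation:

```typescript
// Return every trigger whose pattern matches the input (case-insensitive).
function matchTriggers(input: string, triggers: ContextTrigger[]): ContextTrigger[] {
  return triggers.filter((t) => new RegExp(t.pattern, 'i').test(input));
}
```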
## Configuration
The system is configured through the `config.ts` file and environment variables.
### Environment Variables
- `OPENAI_API_KEY`: API key for OpenAI
- `OPENAI_MODEL`: Model to use for generating responses (default: gpt-4o-mini)
- `PORT`: Port for the HTTP server (default: 3000)
- `SUPABASE_URL`: URL for Supabase (if using Supabase provider)
- `SUPABASE_KEY`: API key for Supabase (if using Supabase provider)
### Configuration File
The `config.ts` file contains configuration for:
- Default roles
- Tone profiles
- Context trigger patterns
- Memory TTL settings
- OpenAI model settings
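The environment-driven portion of `config.ts` might look like the sketch below; the exported names are assumptions, but the defaults mirror the values documented above:

```typescript
// Assumed shape of the environment-driven config; names are illustrative.
export const config = {
  openaiApiKey: process.env.OPENAI_API_KEY ?? '',
  openaiModel: process.env.OPENAI_MODEL ?? 'gpt-4o-mini',
  port: Number(process.env.PORT ?? 3000),
  supabaseUrl: process.env.SUPABASE_URL, // only needed for the Supabase provider
  supabaseKey: process.env.SUPABASE_KEY,
};
```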
## Extensibility
The system is designed to be extensible in several ways:
### Adding New Roles
New roles can be added by creating new role definitions in the configuration or through the API.
### Adding New Tone Profiles
New tone profiles can be added by updating the tone profiles in the configuration.
### Adding New Context Triggers
New context triggers can be added through the API or by updating the configuration.
### Adding New Memory Providers
New memory providers can be implemented by creating a class that implements the `MemoryProvider` interface.
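For example, a trivial in-memory provider could back the Memory Manager during testing. The methods below are inferred from the `MemoryManager` API above and may not match the real `MemoryProvider` interface exactly:

```typescript
import { randomUUID } from 'node:crypto';

// Skeleton provider for tests; methods inferred from MemoryManager's API.
class InMemoryProvider implements MemoryProvider {
  private memories: VectorMemory[] = [];

  async storeMemory(params: MemoryStorageParams): Promise<VectorMemory> {
    const memory = { id: randomUUID(), ...params } as VectorMemory;
    this.memories.push(memory);
    return memory;
  }

  async getMemoriesByRoleId(roleId: string): Promise<VectorMemory[]> {
    return this.memories.filter((m) => m.roleId === roleId);
  }

  // ...remaining MemoryProvider methods omitted for brevity
}
```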
### Adding New AI Clients
New AI clients can be implemented by creating a class that implements the `AIClient` interface.
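Based on the `OpenAIClient` shown earlier, the `AIClient` interface presumably looks something like the following, which makes it straightforward to swap in a stub for offline testing:

```typescript
// Assumed interface, inferred from OpenAIClient's public methods.
interface AIClient {
  generateResponse(systemPrompt: string, userPrompt: string): Promise<string>;
  createEmbedding(text: string): Promise<number[]>;
}

// Example: a stub client for offline testing.
class EchoClient implements AIClient {
  async generateResponse(systemPrompt: string, userPrompt: string): Promise<string> {
    return `[stub] ${userPrompt}`;
  }
  async createEmbedding(text: string): Promise<number[]> {
    return new Array(1536).fill(0); // fixed-size dummy embedding
  }
}
```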
## Security Considerations
### API Key Management
API keys are stored in environment variables and not exposed to clients.
### Input Validation
All input is validated before processing to prevent injection attacks.
### Rate Limiting
Rate limiting should be implemented to prevent abuse.
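One simple option (not currently part of the codebase) is the `express-rate-limit` middleware, applied to the Express `app` from the server layer:

```typescript
import rateLimit from 'express-rate-limit';

// Example only: throttle each IP to 100 requests per 15-minute window.
app.use(
  rateLimit({
    windowMs: 15 * 60 * 1000,
    max: 100,
  })
);
```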
### Authentication and Authorization
Authentication and authorization should be implemented for production use.
## Performance Considerations
### Memory Usage
Memories are stored with TTL to prevent excessive memory usage.
### Caching
Responses can be cached to improve performance.
### Parallel Processing
Queries can be processed in parallel to improve throughput.
## Future Enhancements
### User Authentication
Implement user authentication to secure the API.
### Role Permissions
Implement role-based access control for different users.
### Streaming Responses
Implement streaming responses for better user experience.
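With the `openai` SDK this could build on the streaming chat API, sketched below (`openai` and `query` are assumed to be in scope):

```typescript
// Stream tokens as they arrive instead of waiting for the full reply.
const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: query }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```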
### Multi-Modal Support
Enhance multi-modal context handling for images and other media.
### Federated Learning
Implement federated learning for improved memory management.
## Conclusion
The Role-Context MCP architecture is designed to be modular, extensible, and scalable. It provides a solid foundation for building role-based AI assistants with context awareness and memory management.