Role-Specific Context MCP Server
by Chris-June
# Role-Context MCP: Complete Documentation
## Table of Contents
1. [Introduction](#introduction)
2. [What is MCP?](#what-is-mcp)
3. [Understanding the Role-Context MCP](#understanding-the-role-context-mcp)
4. [Key Components](#key-components)
5. [Getting Started](#getting-started)
6. [Using the MCP Server](#using-the-mcp-server)
7. [Using the HTTP API](#using-the-http-api)
8. [Advanced Usage](#advanced-usage)
9. [Troubleshooting](#troubleshooting)
10. [FAQ](#faq)
## Introduction
Welcome to the Role-Context MCP documentation! This guide is designed to help beginners understand and use our Model Context Protocol (MCP) server for role-based AI interactions. Whether you're new to AI or just new to our system, this documentation will walk you through everything you need to know.
## What is MCP?
MCP (Model Context Protocol) is a standardized way for applications to interact with AI models. It provides a consistent interface for sending requests to AI models and receiving responses, regardless of the underlying model provider.
Think of MCP as a universal translator between your application and various AI models. Instead of having to learn different APIs for different AI providers, MCP gives you a single, consistent way to communicate with AI models.
### Key Benefits of MCP
- **Standardization**: Use the same code to interact with different AI models
- **Extensibility**: Add new capabilities through tools and resources
- **Context Management**: Maintain and control the context of AI conversations
- **Interoperability**: Easily switch between different AI providers
## Understanding the Role-Context MCP
Our Role-Context MCP is a specialized implementation that focuses on role-based interactions with AI models. It allows you to define different "roles" for your AI assistant, each with its own expertise, tone, and memory.
### What Makes Our MCP Special
1. **Role-Based Interactions**: Define different expert roles (like "Marketing Expert" or "Songwriter") for your AI
2. **Contextual Memory**: Each role maintains its own memory, preventing context bleed between different domains
3. **Dynamic Tone Adjustment**: Change how the AI communicates (professional, creative, technical, etc.)
4. **Real-Time Context Switching**: Switch between different contexts based on triggers or explicit requests
## Key Components
The Role-Context MCP consists of several key components that work together:
### 1. Role Manager
The Role Manager handles the creation, updating, and deletion of roles. Each role has:
- **ID**: A unique identifier (e.g., "marketing-expert")
- **Name**: A human-readable name (e.g., "Marketing Expert")
- **Description**: What the role specializes in
- **Instructions**: Specific guidance for the role
- **Domains**: Areas of expertise (e.g., ["marketing", "advertising"])
- **Tone**: The communication style (e.g., "professional", "creative")
- **System Prompt**: The base instructions for the AI model
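To make the shape concrete, a role definition might look like the following. This is an illustrative sketch only; the authoritative schema lives in the project's type definitions.

```javascript
// An illustrative role object using the fields listed above.
// The exact schema is defined by the project; this is a sketch.
const songwriterRole = {
  id: 'songwriter',                            // unique identifier
  name: 'Songwriter',                          // human-readable name
  description: 'Writes lyrics, melodies, and song structures',
  instructions: 'Favor vivid imagery and singable phrasing',
  domains: ['music', 'lyrics', 'composition'], // areas of expertise
  tone: 'creative',                            // communication style
  systemPrompt: 'You are an experienced songwriter who crafts memorable lyrics.'
};
```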
### 2. Memory Manager
The Memory Manager stores and retrieves memories for each role. Memories are categorized as:
- **Session Memories**: Short-term memories for the current conversation (TTL: 1 hour)
- **User Memories**: Medium-term memories about user preferences (TTL: 30 days)
- **Knowledge Memories**: Long-term factual information (TTL: 1 year)
Memories are stored with vector embeddings, allowing for semantic search to find relevant memories for each query.
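As a rough illustration of how vector search works, the sketch below ranks stored memories by cosine similarity to a query embedding. In the real server the embeddings come from the OpenAI embeddings API; here they are small hypothetical vectors, and the function names are illustrative rather than the server's actual internals.

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the topK memories most similar to the query embedding.
function findRelevantMemories(memories, queryEmbedding, topK = 3) {
  return memories
    .map(m => ({ ...m, score: cosineSimilarity(m.embedding, queryEmbedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```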
### 3. Context Manager
The Context Manager handles real-time context switching based on different factors:
- **Tone Context**: How the AI should communicate
- **Task Context**: What the AI is currently focused on
- **Domain Context**: Which knowledge domain to prioritize
- **User Context**: User-specific preferences and information
- **Environment Context**: Situational factors like time or location
- **Multimodal Context**: Context from images or other non-text sources
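To make trigger-based switching concrete, here is a minimal sketch of the idea behind keyword triggers: scan user input for configured keywords and return the context switch they imply. The trigger shapes below are illustrative, not the server's actual schema.

```javascript
// Illustrative trigger list: each trigger maps keywords to a context change.
const triggers = [
  { keywords: ['budget', 'roi'], contextType: 'domain', contextValue: 'finance' },
  { keywords: ['joke', 'funny'], contextType: 'tone', contextValue: 'witty' },
];

// Return the first context switch implied by the input, or null if none match.
function checkInputForTriggers(input, triggerList) {
  const lower = input.toLowerCase();
  for (const t of triggerList) {
    if (t.keywords.some(k => lower.includes(k))) {
      return { contextType: t.contextType, contextValue: t.contextValue };
    }
  }
  return null;
}
```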
### 4. OpenAI Client
The OpenAI Client handles communication with the OpenAI API, including:
- **Generating Responses**: Using the configured chat model (`gpt-4o-mini` by default)
- **Creating Embeddings**: For vector search of memories
## Getting Started
### Prerequisites
- Node.js 18 or higher
- npm or yarn
- OpenAI API key
### Installation
1. Clone the repository:
```bash
git clone https://github.com/yourusername/role-context-mcp.git
cd role-context-mcp
```
2. Install dependencies:
```bash
npm install
```
3. Create a `.env` file with your OpenAI API key:
```
OPENAI_API_KEY=your_api_key_here
OPENAI_MODEL=gpt-4o-mini
PORT=3000
```
4. Build the project:
```bash
npm run build
```
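The server reads the environment variables above at startup. As a rough sketch of the kind of validation involved (illustrative only; the project's actual configuration loading lives in `config.ts`):

```javascript
// Illustrative sketch: validate required settings and apply defaults
// that mirror the .env keys shown above.
function loadConfig(env = process.env) {
  if (!env.OPENAI_API_KEY) {
    throw new Error('OPENAI_API_KEY is required');
  }
  return {
    apiKey: env.OPENAI_API_KEY,
    model: env.OPENAI_MODEL || 'gpt-4o-mini',
    port: Number(env.PORT) || 3000,
  };
}
```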
### Running the Server
You can run the server in two modes:
#### MCP Server Mode
This mode is for integrating with MCP clients:
```bash
npm run start:mcp
```
#### HTTP Server Mode
This mode exposes a REST API for standard HTTP requests:
```bash
npm run start:http
```
For development with auto-restart:
```bash
npm run dev:http
```
## Using the MCP Server
To use the MCP server, you'll need an MCP client. The official TypeScript SDK (`@modelcontextprotocol/sdk`) provides one.
### Basic Example
```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

async function main() {
  // Spawn the MCP server as a subprocess and connect over stdio
  const transport = new StdioClientTransport({
    command: 'npm',
    args: ['run', 'start:mcp'],
  });
  const client = new Client({ name: 'example-client', version: '1.0.0' });
  await client.connect(transport);

  // Process a query with a specific role
  const response = await client.callTool({
    name: 'process-with-role',
    arguments: {
      roleId: 'marketing-expert',
      query: 'How can I improve my social media engagement?'
    }
  });

  console.log('Response:', response);

  // Close the client
  await client.close();
}

main().catch(console.error);
```
### Available Tools
The MCP server provides several tools:
#### Role Management Tools
- **process-with-role**: Process a query using a specific role
- **create-role**: Create a new custom role
- **update-role**: Update an existing role
- **delete-role**: Delete a custom role
- **change-role-tone**: Change a role's tone
#### Memory Management Tools
- **store-memory**: Store a memory for a role
- **get-memories**: Get memories for a role
- **clear-memories**: Clear memories for a role
#### Context Management Tools
- **switch-context**: Switch context for an agent
- **get-current-context**: Get current context for an agent
- **get-context-history**: Get context history for an agent
- **add-context-trigger**: Add a new context trigger
- **update-context-trigger**: Update an existing context trigger
- **delete-context-trigger**: Delete a context trigger
- **check-input-for-triggers**: Check input for context triggers
- **handle-multi-modal-context**: Handle multi-modal context
## Using the HTTP API
The HTTP API provides a RESTful interface for interacting with the MCP server.
### Endpoints
#### Health Check
```
GET /health
```
Returns the status of the server.
#### Role Management
```
GET /roles
```
Returns all available roles.
```
GET /roles/:roleId
```
Returns a specific role.
```
POST /roles
```
Creates a new role. Example request body:
```json
{
"id": "tech-writer",
"name": "Technical Writer",
"description": "Specializes in clear, concise technical documentation",
"instructions": "Create documentation that is accessible to both technical and non-technical audiences",
"domains": ["technical-writing", "documentation", "tutorials"],
"tone": "technical",
"systemPrompt": "You are an experienced technical writer with expertise in creating clear, concise documentation for complex systems."
}
```
```
PATCH /roles/:roleId
```
Updates an existing role. Example request body:
```json
{
"tone": "casual",
"instructions": "Updated instructions here"
}
```
```
DELETE /roles/:roleId
```
Deletes a custom role.
#### Processing Queries
```
POST /process
```
Processes a query using a specific role. Example request body:
```json
{
"roleId": "marketing-expert",
"query": "How can I improve my social media engagement?",
"customInstructions": "Focus on B2B strategies"
}
```
#### Tone Profiles
```
GET /tones
```
Returns all available tone profiles.
### Example: Calling the API from JavaScript
```javascript
// Using the fetch API in a browser
async function processQuery() {
  const response = await fetch('http://localhost:3000/process', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      roleId: 'marketing-expert',
      query: 'How can I improve my social media engagement?'
    }),
  });
  const data = await response.json();
  console.log(data.response);
}

// Using axios in Node.js
import axios from 'axios';

async function processQueryWithAxios() {
  const response = await axios.post('http://localhost:3000/process', {
    roleId: 'marketing-expert',
    query: 'How can I improve my social media engagement?'
  });
  console.log(response.data.response);
}
```
### Example: React Component
```jsx
import React, { useState, useEffect } from 'react';
import axios from 'axios';
function RoleBasedChat() {
const [roles, setRoles] = useState([]);
const [selectedRole, setSelectedRole] = useState('');
const [query, setQuery] = useState('');
const [response, setResponse] = useState('');
const [loading, setLoading] = useState(false);
useEffect(() => {
// Fetch available roles when component mounts
async function fetchRoles() {
try {
const response = await axios.get('http://localhost:3000/roles');
setRoles(response.data.roles);
if (response.data.roles.length > 0) {
setSelectedRole(response.data.roles[0].id);
}
} catch (error) {
console.error('Error fetching roles:', error);
}
}
fetchRoles();
}, []);
async function handleSubmit(e) {
e.preventDefault();
setLoading(true);
try {
const response = await axios.post('http://localhost:3000/process', {
roleId: selectedRole,
query: query
});
setResponse(response.data.response);
} catch (error) {
console.error('Error processing query:', error);
setResponse('Error: ' + error.message);
} finally {
setLoading(false);
}
}
return (
<div className="chat-container">
<h1>Role-Based AI Assistant</h1>
<div className="role-selector">
<label>Select Role:</label>
<select
value={selectedRole}
onChange={(e) => setSelectedRole(e.target.value)}
>
{roles.map(role => (
<option key={role.id} value={role.id}>
{role.name}
</option>
))}
</select>
</div>
<form onSubmit={handleSubmit}>
<textarea
value={query}
onChange={(e) => setQuery(e.target.value)}
placeholder="Type your question here..."
rows={4}
/>
<button type="submit" disabled={loading || !selectedRole || !query}>
{loading ? 'Processing...' : 'Submit'}
</button>
</form>
{response && (
<div className="response">
<h2>Response:</h2>
<div className="response-content">{response}</div>
</div>
)}
</div>
);
}
export default RoleBasedChat;
```
## Advanced Usage
### Custom Roles
You can create custom roles with specific expertise domains, tones, and instructions:
```javascript
const response = await axios.post('http://localhost:3000/roles', {
id: "financial-advisor",
name: "Financial Advisor",
description: "Provides financial planning and investment advice",
instructions: "Give balanced financial advice considering risk tolerance and long-term goals",
domains: ["finance", "investing", "retirement", "taxes"],
tone: "professional",
systemPrompt: "You are a certified financial planner with 15+ years of experience helping clients achieve their financial goals."
});
```
### Context Switching
You can dynamically switch contexts using the MCP client:
```javascript
const response = await client.callTool({
  name: 'switch-context',
  arguments: {
    agentId: 'marketing-expert',
    contextType: 'tone',
    contextValue: 'witty',
    priority: 'high'
  }
});
```
### Memory Management
You can store and retrieve memories for specific roles:
```javascript
// Store a memory
const storeResponse = await client.callTool({
  name: 'store-memory',
  arguments: {
    roleId: 'marketing-expert',
    content: 'The user prefers Instagram over TikTok for their business',
    type: 'user',
    importance: 'medium'
  }
});

// Process a query that will use relevant memories
const queryResponse = await client.callTool({
  name: 'process-with-role',
  arguments: {
    roleId: 'marketing-expert',
    query: 'What social media platform should I focus on?'
  }
});
```
## Troubleshooting
### Common Issues
#### "Connection refused" error
**Problem**: You're getting a "Connection refused" error when trying to connect to the HTTP server.
**Solution**: Make sure the server is running and that you're connecting to the correct port. By default the server listens on port 3000; you can change this via the `PORT` variable in the `.env` file.
#### "Role not found" error
**Problem**: You're getting a "Role not found" error when trying to process a query.
**Solution**: Make sure you're using a valid role ID. You can get a list of available roles by calling `GET /roles`.
#### Slow response times
**Problem**: The server is taking a long time to respond to queries.
**Solution**: This could be due to the OpenAI API being slow or rate-limited. Try using a different model or increasing the timeout in your client.
### Debugging
To enable more detailed logging, you can set the `DEBUG` environment variable:
```bash
DEBUG=role-context-mcp:* npm run start:http
```
## FAQ
### What is the difference between the MCP server and the HTTP server?
The MCP server uses the Model Context Protocol to communicate with MCP clients, while the HTTP server exposes a RESTful API that can be called using standard HTTP requests. The HTTP server is easier to integrate with existing web applications, while the MCP server provides more advanced features like streaming responses and bidirectional communication.
### Can I use a different AI model?
Yes, you can change the AI model by modifying the `OPENAI_MODEL` environment variable in your `.env` file. The default is `gpt-4o-mini`, but you can use any model supported by the OpenAI API.
### How do I add more default roles?
You can add more default roles by modifying the `config.ts` file. Look for the `roles.defaults` array and add your new roles there.
### How long are memories stored?
Memories have different time-to-live (TTL) values depending on their type:
- Session memories: 1 hour
- User memories: 30 days
- Knowledge memories: 1 year
You can customize these values in the `config.ts` file.
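A sketch of how such TTLs might be applied when deciding whether a memory is still valid (the constant names and memory shape here are illustrative; the real values and logic live in `config.ts` and the memory provider):

```javascript
// TTL values (in milliseconds) mirroring the defaults described above.
const MEMORY_TTL = {
  session: 60 * 60 * 1000,               // 1 hour
  user: 30 * 24 * 60 * 60 * 1000,        // 30 days
  knowledge: 365 * 24 * 60 * 60 * 1000,  // 1 year
};

// A memory is expired once its age exceeds the TTL for its type.
function isExpired(memory, now = Date.now()) {
  return now - memory.createdAt > MEMORY_TTL[memory.type];
}
```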
### Can I use this with a database?
Yes, the memory provider is designed to be extensible. The default implementation uses in-memory storage, but there's also a Supabase provider included. You can implement your own provider to use any database you prefer.
### How do I deploy this to production?
For production deployment, we recommend:
1. Building the project with `npm run build`
2. Using a process manager like PM2 to run the server
3. Setting up a reverse proxy like Nginx to handle HTTPS and load balancing
4. Using environment variables for configuration
```bash
# Example production start command with PM2
pm2 start npm -- run start:http
```