Uses the OpenAI API to generate AI-powered summaries of chat history, compressing conversations while preserving context with models like gpt-4o-mini.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@SlimContext MCP Server summarize this long chat history to fit within 4000 tokens".
That's it! The server will respond to your query, and you can continue using it as needed.
SlimContext MCP Server
A Model Context Protocol (MCP) server that wraps the SlimContext library, providing AI chat history compression tools for MCP-compatible clients.
Overview
SlimContext MCP Server exposes two powerful compression strategies as MCP tools:
trim_messages - Token-based compression that removes the oldest messages when exceeding token thresholds
summarize_messages - AI-powered compression using OpenAI to create concise summaries
Installation
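Assuming the server is published to npm under the (hypothetical) name slimcontext-mcp-server, a typical install would be:

```bash
# Hypothetical package name; check the repository for the published name
npm install -g slimcontext-mcp-server
```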
Development
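For local development, a typical Node.js workflow (the script names below are common conventions, not confirmed for this repository) is:

```bash
git clone <repository-url>   # repository URL not listed here
cd slimcontext-mcp-server    # hypothetical directory name
npm install
npm run build                # assumes a standard "build" script
npm test                     # assumes a standard "test" script
```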
Configuration
MCP Client Setup
Add to your MCP client configuration:
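A typical entry, assuming the hypothetical slimcontext-mcp-server package name above and an npx launcher, looks like:

```json
{
  "mcpServers": {
    "slimcontext": {
      "command": "npx",
      "args": ["-y", "slimcontext-mcp-server"],
      "env": {
        "OPENAI_API_KEY": "your-api-key"
      }
    }
  }
}
```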
Environment Variables
OPENAI_API_KEY: OpenAI API key for summarization (optional, can be passed as tool parameter)
Tools
trim_messages
Compresses chat history using token-based trimming strategy.
Parameters:
messages (required): Array of chat messages
maxModelTokens (optional): Maximum model token context window (default: 8192)
thresholdPercent (optional): Percentage threshold to trigger compression, 0-1 (default: 0.7)
minRecentMessages (optional): Minimum recent messages to preserve (default: 2)
Example:
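A plausible set of tool arguments based on the parameters above (a sketch, not taken verbatim from the original docs):

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Tell me about the history of Rome." },
    { "role": "assistant", "content": "Rome was founded, according to legend, in 753 BC..." },
    { "role": "user", "content": "And what about the empire?" }
  ],
  "maxModelTokens": 8192,
  "thresholdPercent": 0.7,
  "minRecentMessages": 2
}
```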
Response:
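Presumably the tool returns the compressed message list, along these lines (field names are guesses):

```json
{
  "success": true,
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "assistant", "content": "Rome was founded, according to legend, in 753 BC..." },
    { "role": "user", "content": "And what about the empire?" }
  ]
}
```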
summarize_messages
Compresses chat history using AI-powered summarization strategy.
Parameters:
messages (required): Array of chat messages
maxModelTokens (optional): Maximum model token context window (default: 8192)
thresholdPercent (optional): Percentage threshold to trigger compression, 0-1 (default: 0.7)
minRecentMessages (optional): Minimum recent messages to preserve (default: 4)
openaiApiKey (optional): OpenAI API key (can also use OPENAI_API_KEY env var)
openaiModel (optional): OpenAI model for summarization (default: 'gpt-4o-mini')
customPrompt (optional): Custom summarization prompt
Example:
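A plausible call, again sketched from the parameter list above rather than copied from the original docs:

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Let's plan a trip to Japan." },
    { "role": "assistant", "content": "Great! Spring is ideal for cherry blossoms..." },
    { "role": "user", "content": "What was the budget we discussed?" }
  ],
  "maxModelTokens": 8192,
  "minRecentMessages": 2,
  "openaiModel": "gpt-4o-mini"
}
```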
Response:
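A sketch of a plausible result: system messages kept, the middle of the conversation replaced by an AI summary, recent messages preserved (field names are guesses):

```json
{
  "success": true,
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "assistant", "content": "Summary: The user is planning a spring trip to Japan with a previously discussed budget..." },
    { "role": "user", "content": "What was the budget we discussed?" }
  ]
}
```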
Message Format
Both tools expect messages in SlimContext format:
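The format block itself is missing here; a plausible shape, consistent with the message parameters above, is a simple role/content pair:

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Hello!" },
  { "role": "assistant", "content": "Hi there! How can I help?" }
]
```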
Error Handling
All tools return structured error responses:
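A sketch of what a structured error might look like (the exact fields are an assumption):

```json
{
  "success": false,
  "error": "OpenAI API key is required for summarization. Provide openaiApiKey or set OPENAI_API_KEY."
}
```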
Common error scenarios:
Missing OpenAI API key for summarization
Invalid message format
OpenAI API rate limits or errors
Invalid parameter values
Token Estimation
SlimContext uses a simple heuristic for token estimation: Math.ceil(content.length / 4) + 2. This provides a reasonable approximation for most use cases. For more accurate token counting, you would need to implement a custom token estimator in your client application.
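As a concrete illustration, the stated heuristic in TypeScript:

```typescript
// Token estimate per the heuristic above: ~4 characters per token,
// plus a small constant overhead per message.
function estimateTokens(content: string): number {
  return Math.ceil(content.length / 4) + 2;
}

estimateTokens("Hello, world!"); // Math.ceil(13 / 4) + 2 = 6
```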
Compression Strategies
Trimming Strategy
Preserves all system messages
Preserves the most recent N messages
Removes oldest non-system messages until under token threshold
Fast and deterministic
No external API dependencies
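A minimal TypeScript sketch of this strategy (illustrative only, not the library's actual implementation):

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Heuristic from the Token Estimation section above.
const estimateTokens = (content: string) => Math.ceil(content.length / 4) + 2;

function trimMessages(
  messages: Message[],
  maxModelTokens = 8192,
  thresholdPercent = 0.7,
  minRecentMessages = 2
): Message[] {
  const budget = maxModelTokens * thresholdPercent;
  const result = [...messages];
  const total = () =>
    result.reduce((sum, m) => sum + estimateTokens(m.content), 0);

  while (total() > budget) {
    // Drop the oldest message that is neither a system message
    // nor one of the protected most-recent messages.
    const idx = result.findIndex(
      (m, i) => m.role !== "system" && i < result.length - minRecentMessages
    );
    if (idx === -1) break; // only protected messages remain
    result.splice(idx, 1);
  }
  return result;
}
```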
Summarization Strategy
Preserves all system messages
Preserves the most recent N messages
Summarizes middle portion of conversation using AI
Creates contextually rich summaries
Requires OpenAI API access
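A rough TypeScript sketch of this flow, using the official openai Node SDK (again illustrative, not the server's actual code):

```typescript
import OpenAI from "openai";

interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

async function summarizeMessages(
  messages: Message[],
  minRecentMessages = 4,
  openaiModel = "gpt-4o-mini"
): Promise<Message[]> {
  const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  const recent = rest.slice(-minRecentMessages);
  const middle = rest.slice(0, rest.length - minRecentMessages);
  if (middle.length === 0) return messages; // nothing to summarize

  const completion = await openai.chat.completions.create({
    model: openaiModel,
    messages: [
      {
        role: "system",
        content:
          "Summarize the following conversation concisely, preserving key facts, decisions, and open questions.",
      },
      {
        role: "user",
        content: middle.map((m) => `${m.role}: ${m.content}`).join("\n"),
      },
    ],
  });

  // Replace the middle of the conversation with a single summary message.
  const summary: Message = {
    role: "assistant",
    content: `Summary of earlier conversation: ${completion.choices[0].message.content ?? ""}`,
  };
  return [...system, summary, ...recent];
}
```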
License
MIT
Contributing
Fork the repository
Create a feature branch
Make your changes
Add tests for new functionality
Submit a pull request
Related
SlimContext - The underlying compression library
Model Context Protocol - The protocol specification
MCP SDK - TypeScript SDK for MCP