# Summarization Functions

Intelligent text summarization for the Model Context Protocol.
## Overview
A powerful MCP server that provides intelligent summarization capabilities through a clean, extensible architecture. Built with modern TypeScript and designed for seamless integration with AI workflows.
## Installation

### Installing via Smithery
To install Summarization Functions for Claude Desktop automatically via Smithery:
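The install command below is a sketch: the Smithery package name is assumed from the repository name and may differ.

```bash
npx -y @smithery/cli install summarization-functions --client claude
```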
## AI Agent Integration
This MCP server was primarily developed to enhance the performance and reliability of AI agents like Roo Cline and Cline. It addresses a critical challenge in AI agent operations: context window management.
### Context Window Optimization
AI agents frequently encounter situations where their context window gets rapidly filled with large outputs from:
- Command execution results
- File content readings
- Directory listings
- API responses
- Error messages and stack traces
This server helps maintain efficient context usage by:
- Providing concise, relevant summaries instead of full content
- Storing full content for reference when needed
- Offering focused analysis based on specific needs (security, API surface, etc.)
- Supporting multiple output formats for optimal context utilization
### Benefits for AI Agents

- **Reduced Failure Rates**: by preventing context window overflow
- **Improved Response Quality**: through focused, relevant summaries
- **Enhanced Efficiency**: by maintaining important context while reducing noise
- **Better Resource Management**: through intelligent content caching and retrieval
- **Flexible Integration**: supporting multiple AI providers and configuration options
### Recommended AI Agent Prompt
When integrating with AI agents, include the following in your agent's instructions:
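The exact wording is up to you; as a sketch, an instruction block along these lines captures the intent (tool names are taken from the Available Functions section of this README):

```
When you run commands, read files, or list directories, prefer the
summarization tools (summarize_command, summarize_files,
summarize_directory, summarize_text) whenever the raw output is likely
to be large. If you later need exact details, call get_full_content
with the summary ID instead of re-reading the source.
```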
*Summarization in action on the Ollama repository (Gemini 2.0 Flash summarization, Claude 3.5 agent)*
## Features

- **Command Output Summarization**: execute commands and get concise summaries of their output
- **File Content Analysis**: summarize single or multiple files while maintaining technical accuracy
- **Directory Structure Understanding**: get clear overviews of complex directory structures
- **Flexible Model Support**: use models from different providers
- **AI Agent Context Optimization**: prevent context window overflow and improve AI agent performance through intelligent summarization
## Configuration
The server supports multiple AI providers through environment variables:
### Required Environment Variables

- `PROVIDER`: AI provider to use. Supported values:
  - `ANTHROPIC` - Claude models from Anthropic
  - `OPENAI` - GPT models from OpenAI
  - `OPENAI-COMPATIBLE` - OpenAI-compatible APIs (e.g. Azure)
  - `GOOGLE` - Gemini models from Google
- `API_KEY`: API key for the selected provider
### Optional Environment Variables

- `MODEL_ID`: specific model to use (defaults to the provider's standard model)
- `PROVIDER_BASE_URL`: custom API endpoint for OpenAI-compatible providers
- `MAX_TOKENS`: maximum tokens for model responses (default: 1024)
- `SUMMARIZATION_CHAR_THRESHOLD`: character count above which content is summarized (default: 512)
- `SUMMARIZATION_CACHE_MAX_AGE`: cache duration in milliseconds (default: 3600000 - 1 hour)
- `MCP_WORKING_DIR`: fallback directory used when resolving files given as relative paths
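As an illustration of how `SUMMARIZATION_CHAR_THRESHOLD` gates summarization, here is a minimal sketch; the actual server logic may differ, and `shouldSummarize` is a hypothetical helper:

```typescript
// Sketch only: shows how SUMMARIZATION_CHAR_THRESHOLD can gate summarization.
// The variable name matches the docs above; the comparison is an assumption.
const threshold: number = Number(process.env.SUMMARIZATION_CHAR_THRESHOLD ?? "512");

// Content at or below the limit is passed through unchanged;
// longer content is routed to the summarization model.
function shouldSummarize(content: string, limit: number = threshold): boolean {
  return content.length > limit;
}

console.log(shouldSummarize("npm install completed.", 512)); // short: passed through
console.log(shouldSummarize("x".repeat(2048), 512));         // large: summarized
```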
### Example Configurations
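For example, a Google setup might use the following environment (the values are illustrative):

```
PROVIDER=GOOGLE
API_KEY=your-google-api-key
MODEL_ID=gemini-2.0-flash
MAX_TOKENS=1024
```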
## Usage
Add the server to your MCP configuration file:
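A typical entry looks like the following; the command path is a placeholder and depends on where the server is installed:

```json
{
  "mcpServers": {
    "summarization": {
      "command": "node",
      "args": ["path/to/summarization-functions/build/index.js"],
      "env": {
        "PROVIDER": "ANTHROPIC",
        "API_KEY": "your-api-key"
      }
    }
  }
}
```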
### Available Functions
The server provides the following summarization tools:
- `summarize_command`: execute and summarize command output
- `summarize_files`: summarize file contents
- `summarize_directory`: get a directory structure overview
- `summarize_text`: summarize arbitrary text content
- `get_full_content`: retrieve the full content for a given summary ID
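Parameter names are not documented in this section, so the following MCP `tools/call` request is a sketch with hypothetical arguments (`content`, `hint`):

```json
{
  "method": "tools/call",
  "params": {
    "name": "summarize_text",
    "arguments": {
      "content": "...long log output...",
      "hint": "focus on errors"
    }
  }
}
```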
## License
MIT