Glama
by RalphLi213

summarize_large_chat

Summarize extensive chat conversations by breaking them into manageable chunks, creating individual summaries, and generating a comprehensive master summary for easier review.

Instructions

Handle extremely large chat histories by chunking them into manageable pieces. Each chunk gets its own summary, then creates a master summary.

Args:

- `chat_history`: The large chat conversation text to summarize
- `title`: Optional title for the summary
- `chunk_size`: Size of each chunk in characters (default: 50,000)
- `overlap`: Overlap between chunks in characters (default: 5,000)

Returns: Information about the chunked summaries created
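
The chunking arithmetic described above can be sketched standalone. This is a minimal illustration, not the tool's actual code; `chunk_bounds` is a hypothetical helper, and the sizes are the tool's documented defaults:

```python
def chunk_bounds(total: int, chunk_size: int, overlap: int) -> list[tuple[int, int]]:
    """Return (start, end) character ranges, each chunk overlapping the previous."""
    bounds = []
    start = 0
    while start < total:
        end = min(start + chunk_size, total)
        bounds.append((start, end))
        if end >= total:
            break  # last chunk reached; stepping back by `overlap` would never terminate
        start = end - overlap
    return bounds

# A 120,000-char history with the default 50,000/5,000 settings yields three chunks:
print(chunk_bounds(120_000, 50_000, 5_000))
# → [(0, 50000), (45000, 95000), (90000, 120000)]
```

Note that each chunk after the first starts `overlap` characters before the previous chunk's end, so context at a boundary appears in both files.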

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `chat_history` | Yes | The large chat conversation text to summarize | — |
| `title` | No | Optional title for the summary | — |
| `chunk_size` | No | Size of each chunk in characters | 50,000 |
| `overlap` | No | Overlap between chunks in characters | 5,000 |
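
A call to this tool might carry arguments like the following. This payload is illustrative only (the `title` value and transcript content are made up); the keys and defaults come from the schema above:

```python
# Hypothetical argument payload for the summarize_large_chat tool.
arguments = {
    "chat_history": "User: ...\nAssistant: ...\n" * 10_000,  # the oversized transcript
    "title": "Q3 planning thread",  # optional; used in generated filenames and headings
    "chunk_size": 50_000,           # characters per chunk (the documented default)
    "overlap": 5_000,               # characters shared between adjacent chunks (default)
}
print(sorted(arguments))
```

Only `chat_history` is required; omitting the rest falls back to the defaults shown.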

Implementation Reference

  • main.py:327-470 (handler)
    The handler function for the 'summarize_large_chat' MCP tool. It chunks large chat histories into overlapping segments, saves each chunk to a separate Markdown file, and creates a master summary file listing all chunks. Includes input validation, error handling, and detailed file generation with metadata.
from datetime import datetime
from typing import Optional

# Excerpted from main.py. NOTES_DIR (a pathlib.Path) and ensure_notes_directory()
# are defined earlier in the module.

def summarize_large_chat(
    chat_history: str,
    title: Optional[str] = None,
    chunk_size: int = 50000,
    overlap: int = 5000
) -> str:
    """
    Handle extremely large chat histories by chunking them into manageable pieces.
    Each chunk gets its own summary, then creates a master summary.

    Args:
        chat_history: The large chat conversation text to summarize
        title: Optional title for the summary
        chunk_size: Size of each chunk in characters (default: 50,000)
        overlap: Overlap between chunks in characters (default: 5,000)

    Returns:
        Information about the chunked summaries created
    """
    try:
        if not ensure_notes_directory():
            return "Error: Could not create or access notes directory"

        history_size = len(chat_history)
        if history_size <= chunk_size:
            return (
                f"Chat history ({history_size:,} chars) is small enough for "
                f"regular summarization. Use summarize_chat instead."
            )

        # Calculate chunks
        chunks = []
        start = 0
        chunk_num = 1

        while start < history_size:
            end = min(start + chunk_size, history_size)
            chunk_text = chat_history[start:end]
            chunks.append({
                'number': chunk_num,
                'text': chunk_text,
                'start': start,
                'end': end,
                'size': len(chunk_text)
            })
            if end >= history_size:
                break  # Last chunk reached; stepping back by `overlap` would loop forever
            start = end - overlap  # Overlap to maintain context
            chunk_num += 1

        # Create individual chunk summaries
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        clean_title = title.replace(' ', '_')[:30] if title else "Large_Chat"

        chunk_summaries = []
        chunk_files = []

        for chunk in chunks:
            chunk_filename = f"chat_chunk_{timestamp}_{clean_title}_part{chunk['number']:02d}.md"
            chunk_filepath = NOTES_DIR / chunk_filename

            # F-string body is flush-left so the generated Markdown has no stray indentation.
            chunk_content = f"""# Chat Chunk {chunk['number']}: {title or 'Large Chat'}

**Date:** {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}
**Chunk:** {chunk['number']} of {len(chunks)}
**Characters:** {chunk['start']:,} - {chunk['end']:,} ({chunk['size']:,} chars)
**Total Size:** {history_size:,} characters

## Chunk Summary

This is part {chunk['number']} of a large conversation that was split into {len(chunks)} chunks for processing.

### Key Points from This Chunk
- Chunk size: {chunk['size']:,} characters
- Position in conversation: {(chunk['start'] / history_size) * 100:.1f}% - {(chunk['end'] / history_size) * 100:.1f}%

## Chunk Content

```
{chunk['text']}
```

---
*Generated by Chat History Summarizer MCP Server - Large Chat Handler*
"""

            with open(chunk_filepath, 'w', encoding='utf-8') as f:
                f.write(chunk_content)

            chunk_files.append(chunk_filename)
            chunk_summaries.append(
                f"Chunk {chunk['number']}: {chunk['size']:,} chars "
                f"({chunk['start']:,}-{chunk['end']:,})"
            )

        # Create master summary file
        master_filename = f"chat_master_{timestamp}_{clean_title}.md"
        master_filepath = NOTES_DIR / master_filename

        master_content = f"""# Master Summary: {title or 'Large Chat'}

**Date:** {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}
**Total Size:** {history_size:,} characters ({history_size / (1024 * 1024):.2f} MB)
**Chunks Created:** {len(chunks)}
**Chunk Size:** {chunk_size:,} characters
**Overlap:** {overlap:,} characters

## Overview

This large conversation was automatically split into {len(chunks)} chunks for better handling and processing.

## Chunk Breakdown

{chr(10).join(f"- **{summary}**" for summary in chunk_summaries)}

## Chunk Files Created

{chr(10).join(f"- `{filename}`" for filename in chunk_files)}

## Usage Instructions

1. **Read individual chunks** for detailed content
2. **Search across chunks** to find specific topics
3. **Use chunk summaries** for quick reference
4. **Combine insights** from multiple chunks as needed

## Master Summary

> **Note:** For extremely large conversations, consider reading individual chunks for complete context.
> This master file provides an overview of the conversation structure.

---
*Generated by Chat History Summarizer MCP Server - Large Chat Handler*
"""

        with open(master_filepath, 'w', encoding='utf-8') as f:
            f.write(master_content)

        return f"""Large chat processed successfully!

**Master Summary:** {master_filepath}
**Total Size:** {history_size:,} characters ({history_size / (1024 * 1024):.2f} MB)
**Chunks Created:** {len(chunks)}

**Files Created:**
- Master: {master_filename}
{chr(10).join(f"- Chunk {i + 1}: {filename}" for i, filename in enumerate(chunk_files))}

All chunks preserve the complete original conversation with overlapping context for continuity."""

    except Exception as e:
        return f"Error processing large chat: {str(e)}"
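
The claim that "all chunks preserve the complete original conversation with overlapping context" can be checked with a small standalone sketch. `make_chunks` below is a hypothetical reduction of the handler's windowing loop, applied to a synthetic transcript:

```python
def make_chunks(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text with the same overlapping-window logic as the handler above."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end >= len(text):
            break
        start = end - overlap
    return chunks

history = "".join(f"[msg {i}] " for i in range(500))  # synthetic 4,890-char transcript
chunks = make_chunks(history, chunk_size=1000, overlap=100)

# Every chunk's tail reappears at the head of the next chunk...
for left, right in zip(chunks, chunks[1:]):
    assert left[-100:] == right[:100]

# ...so dropping each later chunk's first `overlap` chars reconstructs the original.
assert chunks[0] + "".join(c[100:] for c in chunks[1:]) == history
```

This is why the effective step per chunk is `chunk_size - overlap` characters, and why the chunk files together are slightly larger than the original history.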

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RalphLi213/ide-chat-summarizer-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.