analyze_conversations

Analyze conversation patterns to identify trends and generate actionable insights from stored chat data.

Instructions

Analyze conversation patterns and generate insights - AUTO-RUN at session start

Input Schema

Name      Required  Description                                        Default
user_id   No        User ID to analyze                                 DEFAULT_USER_ID
limit     No        Number of recent memories to analyze (max 100)     20
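Both parameters are optional. As a minimal client-side sketch of what the schema implies (the `validate_arguments` helper is illustrative, not part of the server; the 1-100 bound comes from the handler docstring below):

```python
def validate_arguments(args: dict) -> dict:
    """Check tool arguments against the input schema (illustrative sketch)."""
    out = {}
    if args.get("user_id") is not None:
        if not isinstance(args["user_id"], str):
            raise TypeError("user_id must be a string")
        out["user_id"] = args["user_id"]
    limit = int(args.get("limit", 20))  # default mirrors the handler signature
    if not 1 <= limit <= 100:  # docstring caps limit at 100
        raise ValueError("limit must be between 1 and 100")
    out["limit"] = limit
    return out
```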

Implementation Reference

  • MCP tool handler and registration for 'analyze_conversations'. This is the entry point that receives tool calls and delegates to the ReflectionAgent for analysis.
    @mcp.tool(
        name="analyze_conversations",
        description="Analyze conversation patterns and generate insights - AUTO-RUN at session start",
    )
    async def analyze_conversations(
        user_id: str | None = None, limit: int = 20
    ) -> dict[str, Any]:
        """
        Analyze recent conversations to identify patterns, preferences, and generate actionable insights.
    
        ## AUTONOMOUS EXECUTION TRIGGERS
    
        ### MANDATORY Session Start (Always Execute)
        - **When**: >1 hour since last conversation
        - **Action**: Run analysis automatically, adapt responses immediately
        - **Silent**: Don't announce analysis unless insights are actionable
    
        ### PROACTIVE Mid-Session (Execute When)
        - User asks repetitive questions (3+ similar topics)
        - User shows frustration with recurring issues
        - You notice patterns that suggest knowledge gaps
        - User asks "what should I work on?" or similar
    
        ### IMMEDIATE Usage of Results
        - **Adapt communication style**: Use insights to match user preferences
        - **Proactive suggestions**: Mention relevant patterns without being asked
        - **Context awareness**: Reference ongoing projects and preferences
    
        ## What It Analyzes (Enhanced with Semantic Search)
    
        - **Topic Frequency**: Most discussed subjects and technologies
        - **Question Patterns**: Types of questions frequently asked
        - **Work Style**: Problem-solving approaches and preferences
        - **Project Focus**: Current projects and priorities
        - **Knowledge Gaps**: Areas where user needs more support
        - **Recurring Issues**: Problems that appear multiple times
        - **Incomplete Projects**: Work that seems unfinished
    
        ## Autonomous Integration Example
    
        ```python
        # Session start (user returns after 2 hours)
        # → AUTO: analyze_conversations()
        # → Insights: Focus on React hooks, recurring CORS issues
        # → Immediate adaptation: "Welcome back! I see you've been working on React hooks lately..."
    
        # Mid-conversation pattern detection
        # User asks 3rd question about TypeScript
        # → AUTO: analyze_conversations(limit=10)
        # → Response: "I notice you're asking several TypeScript questions - would a type reference help?"
        ```
    
        ## Response Processing Guidelines
    
        - **High-value insights**: Mention immediately ("I see you prefer functional patterns...")
        - **Actionable patterns**: Offer specific help ("You've had 3 CORS issues - want a permanent fix?")
        - **Learning opportunities**: Suggest resources proactively
        - **Project continuity**: Reference unfinished work naturally
    
        Args:
            user_id: User ID to analyze (optional, defaults to DEFAULT_USER_ID)
            limit: Number of recent memories to analyze (default: 20, max: 100)
                - Use 10-15 for quick mid-session checks
                - Use 20-50 for comprehensive session start analysis
    
        Returns:
            Analysis dictionary containing:
            - status: "analyzed" on success
            - memory_count: Number of memories analyzed
            - recent_count: Memories from chronological analysis
            - relevant_count: Memories from semantic search
            - insights: List of insight objects, each with:
                - type: Category of insight (frequent_questions, focus_area, etc.)
                - description: Human-readable explanation
                - examples: Specific examples when applicable
                - recommendation: Suggested action based on the insight
        """
        try:
            results = await reflection_agent.analyze_recent_conversations(
                user_id=user_id, limit=limit
            )
            logger.info(
                "Conversation analysis completed", insights=len(results.get("insights", []))
            )
            return results
        except Exception as e:
            logger.error("Analysis failed", error=str(e))
            raise RuntimeError(f"Analysis failed: {str(e)}") from e
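The handler returns the analysis dictionary described in the docstring. A hypothetical helper showing how a caller might render that result shape (the `summarize_analysis` name is illustrative, not part of the server API):

```python
def summarize_analysis(results: dict) -> str:
    """Render the documented analysis result as a short status report (sketch)."""
    if results.get("status") != "analyzed":
        return "No conversation history to analyze yet."
    lines = [
        f"Analyzed {results['memory_count']} memories "
        f"({results['recent_count']} recent, {results['relevant_count']} relevant)."
    ]
    # Each insight carries at least a type and a human-readable description.
    for insight in results.get("insights", []):
        lines.append(f"- {insight['type']}: {insight['description']}")
    return "\n".join(lines)
```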
  • Core helper method in ReflectionAgent class that implements the conversation analysis logic: fetches recent/relevant memories, combines them, analyzes patterns via _analyze_patterns, generates insights, and optionally stores a reflection memory.
    async def analyze_recent_conversations(
        self, user_id: str | None = None, limit: int = 20
    ) -> dict[str, Any]:
        """Analyze recent conversations and generate insights using semantic search.
    
        Args:
            user_id: User to analyze (defaults to settings)
            limit: Number of recent memories to analyze
    
        Returns:
            Analysis results with patterns and suggestions
        """
        user_id = user_id or settings.default_user_id
    
        try:
            # Get a mix of recent and semantically relevant memories
            all_memories = await memory_service.get_all_memories(user_id=user_id)
    
            if not all_memories:
                return {"status": "no_memories", "insights": []}
    
            # Get recent memories for recency bias
            recent_memories = sorted(
                all_memories, key=lambda m: m.get("created_at", ""), reverse=True
            )[: limit // 2]  # Half from recent
    
            # Get semantically relevant memories using pattern-based queries
            relevant_memories = await self._get_relevant_memories_for_analysis(
                user_id=user_id,
                recent_memories=recent_memories,
                remaining_limit=limit - len(recent_memories),
            )
    
            # Combine and deduplicate
            combined_memories = self._deduplicate_memories(
                recent_memories + relevant_memories
            )
    
            insights = await self._analyze_patterns(combined_memories)
    
            if insights:
                await self._store_reflection(insights, user_id)
    
            return {
                "status": "analyzed",
                "memory_count": len(combined_memories),
                "recent_count": len(recent_memories),
                "relevant_count": len(relevant_memories),
                "insights": insights,
            }
    
        except Exception as e:
            self._logger.error("Failed to analyze conversations", error=str(e))
            raise
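The `_deduplicate_memories` helper is referenced above but not shown on this page. A plausible minimal sketch, assuming each memory carries an `id` field (falling back to its text when it does not), might look like:

```python
def deduplicate_memories(memories: list[dict]) -> list[dict]:
    """Drop duplicate memories, keeping first-seen order (hypothetical sketch)."""
    seen: set = set()
    unique = []
    for m in memories:
        # Prefer a stable id; fall back to the memory text itself.
        key = m.get("id") or m.get("memory", m.get("content", ""))
        if key not in seen:
            seen.add(key)
            unique.append(m)
    return unique
```

First-seen order matters here: recent memories are concatenated before semantically relevant ones, so on a collision the recent copy wins.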
  • Supporting helper that performs pattern matching on memories to generate insights about frequent questions, focus areas, and problem-solving patterns.
    async def _analyze_patterns(
        self, memories: list[dict[str, Any]]
    ) -> list[dict[str, str]]:
        """Analyze memory patterns and extract insights.
    
        Args:
            memories: List of memories to analyze
    
        Returns:
            List of insights with type and description
        """
        insights = []
    
        # Track topics discussed
        topics = {}
        questions_asked = []
        approaches_tried = []
    
        for memory in memories:
            content = memory.get("memory", memory.get("content", ""))
    
            # Simple pattern matching (could be enhanced with LLM analysis)
            if isinstance(content, str):
                # Track questions
                if "?" in content:
                    questions_asked.append(content)
    
                # Track code-related discussions
                if any(
                    keyword in content.lower()
                    for keyword in ["function", "class", "implement", "code", "debug"]
                ):
                    topics["coding"] = topics.get("coding", 0) + 1
    
                # Track problem-solving approaches
                if any(
                    keyword in content.lower()
                    for keyword in ["try", "attempt", "approach", "solution"]
                ):
                    approaches_tried.append(content)
    
        # Generate insights based on patterns
        if len(questions_asked) > 3:
            insights.append({
                "type": "frequent_questions",
                "description": f"User has asked {len(questions_asked)} questions recently. Consider providing more proactive information.",
                "examples": questions_asked[-3:],
            })
    
        if topics:
            most_discussed = max(topics.items(), key=lambda x: x[1])
            insights.append({
                "type": "focus_area",
                "description": f"Primary focus appears to be on {most_discussed[0]} (mentioned {most_discussed[1]} times)",
                "recommendation": f"Consider preparing more detailed resources on {most_discussed[0]}",
            })
    
        if len(approaches_tried) > 2:
            insights.append({
                "type": "problem_solving_pattern",
                "description": "Multiple approaches being tried, suggesting iterative problem solving",
                "recommendation": "Consider suggesting a structured approach or framework",
            })
    
        return insights
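To see the thresholds above in action, here is a standalone condensation of the same rules (the `pattern_insights` name is illustrative, not the agent's API; it returns only the insight types that fire):

```python
def pattern_insights(memories: list[dict]) -> list[str]:
    """Condensed version of the pattern rules above (sketch)."""
    texts = [m.get("memory", m.get("content", "")) for m in memories]
    questions = [t for t in texts if "?" in t]
    coding_keywords = ("function", "class", "implement", "code", "debug")
    coding_hits = sum(1 for t in texts if any(k in t.lower() for k in coding_keywords))
    approach_keywords = ("try", "attempt", "approach", "solution")
    approaches = [t for t in texts if any(k in t.lower() for k in approach_keywords)]

    fired = []
    if len(questions) > 3:    # strictly more than 3 questions
        fired.append("frequent_questions")
    if coding_hits:           # any coding keyword establishes a focus area
        fired.append("focus_area")
    if len(approaches) > 2:   # 3+ attempted approaches
        fired.append("problem_solving_pattern")
    return fired

memories = [{"memory": f"How do I debug this function? attempt {i}"} for i in range(4)]
# Four question-bearing memories exceed the >3 threshold, each mentions
# "debug"/"function", and each contains "attempt", so all three rules fire.
# → ["frequent_questions", "focus_area", "problem_solving_pattern"]
```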
