# analyze_conversations
Analyze conversation patterns to identify trends and generate actionable insights from stored chat data.
## Instructions
Analyze conversation patterns and generate insights - AUTO-RUN at session start
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| user_id | No | User ID to analyze; falls back to `DEFAULT_USER_ID` | `None` |
| limit | No | Number of recent memories to analyze (max: 100) | `20` |
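The handler's docstring documents the shape of the returned dictionary. As a minimal sketch, a client could validate a result against that documented shape before acting on it; the `validate_analysis` helper and the `sample` payload below are illustrative, not part of the tool's API.

```python
def validate_analysis(result: dict) -> bool:
    """Check an analyze_conversations result against the documented return shape."""
    # Top-level keys documented in the handler's Returns section
    required = {"status", "memory_count", "recent_count", "relevant_count", "insights"}
    if not required.issubset(result):
        return False
    # Each insight is documented to carry at least a type and a description
    return all(
        "type" in insight and "description" in insight
        for insight in result["insights"]
    )

# Hypothetical payload mirroring the documented fields
sample = {
    "status": "analyzed",
    "memory_count": 12,
    "recent_count": 10,
    "relevant_count": 2,
    "insights": [
        {
            "type": "frequent_questions",
            "description": "Several TypeScript generics questions",
            "recommendation": "Offer a type reference",
        }
    ],
}
```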
## Implementation Reference
- `mcp_mitm_mem0/mcp_server.py:509-597` (handler): MCP tool registration and handler function for `analyze_conversations`. Calls `reflection_agent` for the core logic.

````python
@mcp.tool(
    name="analyze_conversations",
    description="Analyze conversation patterns and generate insights - AUTO-RUN at session start",
)
async def analyze_conversations(
    user_id: str | None = None, limit: int = 20
) -> dict[str, Any]:
    """
    Analyze recent conversations to identify patterns, preferences, and generate
    actionable insights.

    ## AUTONOMOUS EXECUTION TRIGGERS

    ### MANDATORY Session Start (Always Execute)
    - **When**: >1 hour since last conversation
    - **Action**: Run analysis automatically, adapt responses immediately
    - **Silent**: Don't announce analysis unless insights are actionable

    ### PROACTIVE Mid-Session (Execute When)
    - User asks repetitive questions (3+ similar topics)
    - User shows frustration with recurring issues
    - You notice patterns that suggest knowledge gaps
    - User asks "what should I work on?" or similar

    ### IMMEDIATE Usage of Results
    - **Adapt communication style**: Use insights to match user preferences
    - **Proactive suggestions**: Mention relevant patterns without being asked
    - **Context awareness**: Reference ongoing projects and preferences

    ## What It Analyzes (Enhanced with Semantic Search)
    - **Topic Frequency**: Most discussed subjects and technologies
    - **Question Patterns**: Types of questions frequently asked
    - **Work Style**: Problem-solving approaches and preferences
    - **Project Focus**: Current projects and priorities
    - **Knowledge Gaps**: Areas where user needs more support
    - **Recurring Issues**: Problems that appear multiple times
    - **Incomplete Projects**: Work that seems unfinished

    ## Autonomous Integration Example
    ```python
    # Session start (user returns after 2 hours)
    # → AUTO: analyze_conversations()
    # → Insights: Focus on React hooks, recurring CORS issues
    # → Immediate adaptation: "Welcome back! I see you've been working on React hooks lately..."

    # Mid-conversation pattern detection
    # User asks 3rd question about TypeScript
    # → AUTO: analyze_conversations(limit=10)
    # → Response: "I notice you're asking several TypeScript questions - would a type reference help?"
    ```

    ## Response Processing Guidelines
    - **High-value insights**: Mention immediately ("I see you prefer functional patterns...")
    - **Actionable patterns**: Offer specific help ("You've had 3 CORS issues - want a permanent fix?")
    - **Learning opportunities**: Suggest resources proactively
    - **Project continuity**: Reference unfinished work naturally

    Args:
        user_id: User ID to analyze (optional, defaults to DEFAULT_USER_ID)
        limit: Number of recent memories to analyze (default: 20, max: 100)
            - Use 10-15 for quick mid-session checks
            - Use 20-50 for comprehensive session start analysis

    Returns:
        Analysis dictionary containing:
        - status: "analyzed" on success
        - memory_count: Number of memories analyzed
        - recent_count: Memories from chronological analysis
        - relevant_count: Memories from semantic search
        - insights: List of insight objects, each with:
            - type: Category of insight (frequent_questions, focus_area, etc.)
            - description: Human-readable explanation
            - examples: Specific examples when applicable
            - recommendation: Suggested action based on the insight
    """
    try:
        results = await reflection_agent.analyze_recent_conversations(
            user_id=user_id, limit=limit
        )
        logger.info(
            "Conversation analysis completed",
            insights=len(results.get("insights", [])),
        )
        return results
    except Exception as e:
        logger.error("Analysis failed", error=str(e))
        raise RuntimeError(f"Analysis failed: {str(e)}") from e
````
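The handler's Response Processing Guidelines can be sketched as a small client-side triage step that turns insight objects into user-facing suggestions. The insight type names and sample payload below are assumptions for illustration, not part of the tool's contract.

```python
def triage_insights(results: dict) -> list[str]:
    """Turn analysis insights into short, user-facing suggestions."""
    suggestions = []
    for insight in results.get("insights", []):
        if insight.get("type") == "recurring_issue":
            # Actionable pattern: offer a specific, permanent fix
            suggestions.append(
                f"Recurring issue: {insight.get('description')} - want a permanent fix?"
            )
        elif insight.get("recommendation"):
            # Otherwise surface the tool's own recommendation
            suggestions.append(insight["recommendation"])
    return suggestions

# Hypothetical analysis result for illustration
analysis = {
    "insights": [
        {"type": "recurring_issue", "description": "CORS errors in the dev server"},
        {
            "type": "focus_area",
            "description": "React hooks",
            "recommendation": "Offer a hooks cheat sheet",
        },
    ],
}
```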
- Core helper method implementing the analysis logic: fetches recent and relevant memories, analyzes patterns, generates insights, and stores reflections.

```python
async def analyze_recent_conversations(
    self, user_id: str | None = None, limit: int = 20
) -> dict[str, Any]:
    """Analyze recent conversations and generate insights using semantic search.

    Args:
        user_id: User to analyze (defaults to settings)
        limit: Number of recent memories to analyze

    Returns:
        Analysis results with patterns and suggestions
    """
    user_id = user_id or settings.default_user_id
    try:
        # Get a mix of recent and semantically relevant memories
        all_memories = await memory_service.get_all_memories(user_id=user_id)
        if not all_memories:
            return {"status": "no_memories", "insights": []}

        # Get recent memories for recency bias
        recent_memories = sorted(
            all_memories, key=lambda m: m.get("created_at", ""), reverse=True
        )[: limit // 2]  # Half from recent

        # Get semantically relevant memories using pattern-based queries
        relevant_memories = await self._get_relevant_memories_for_analysis(
            user_id=user_id,
            recent_memories=recent_memories,
            remaining_limit=limit - len(recent_memories),
        )

        # Combine and deduplicate
        combined_memories = self._deduplicate_memories(
            recent_memories + relevant_memories
        )

        insights = await self._analyze_patterns(combined_memories)
        if insights:
            await self._store_reflection(insights, user_id)

        return {
            "status": "analyzed",
            "memory_count": len(combined_memories),
            "recent_count": len(recent_memories),
            "relevant_count": len(relevant_memories),
            "insights": insights,
        }
    except Exception as e:
        self._logger.error("Failed to analyze conversations", error=str(e))
        raise
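The helper above calls `self._deduplicate_memories()` without its body being shown here. A minimal standalone sketch, assuming each memory dictionary carries an `id` field (with the memory text as a fallback key), might look like the following; the function name and field choices are hypothetical, not the project's actual implementation.

```python
def deduplicate_memories(memories: list[dict]) -> list[dict]:
    """Drop duplicate memories while preserving their original order.

    Assumes memories are dicts with an "id" field; falls back to the
    "memory" text when no id is present.
    """
    seen: set = set()
    unique: list[dict] = []
    for mem in memories:
        key = mem.get("id") or mem.get("memory")
        if key is not None and key in seen:
            continue  # skip memories already included
        if key is not None:
            seen.add(key)
        unique.append(mem)
    return unique

# Recent + relevant lists can overlap, so deduplication keeps the first copy
mems = [
    {"id": "a", "memory": "prefers functional patterns"},
    {"id": "b", "memory": "working on React hooks"},
    {"id": "a", "memory": "prefers functional patterns"},
]
```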