tunnel_history

Analyze engagement history with a domain to view conversation patterns, thinking stage distribution, and cognitive evolution over time.

Instructions

Meta-view of your engagement with a domain over time. Shows total conversations, thinking stage distribution, importance peaks, and cognitive patterns.
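The thinking-stage distribution is reported as a text histogram, one block character per 5 percentage points. A toy sketch of that counting-and-rendering pattern, using made-up stage values (the real tool reads stages from the summaries database):

```python
# Toy data standing in for the thinking_stage column of the query results.
stages = ["debugging", "debugging", "exploring", "debugging"]

# Tally occurrences per stage.
stage_counts = {}
for s in stages:
    stage_counts[s] = stage_counts.get(s, 0) + 1

# Render one line per stage, most frequent first.
lines = []
for s, c in sorted(stage_counts.items(), key=lambda x: -x[1]):
    pct = c / len(stages) * 100
    bar = "█" * int(pct / 5)  # one block per 5 percentage points
    lines.append(f"  {s}: {c} ({pct:.0f}%) {bar}")

print("\n".join(lines))
```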

Input Schema

| Name | Required | Description | Default |
|--------|----------|-------------|---------|
| domain | Yes | | |
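A hypothetical invocation payload, assuming the standard MCP JSON-RPC `tools/call` method; the `"ai-dev"` domain value mirrors the probe argument used in the registry entry below:

```python
import json

# Example MCP tools/call request for tunnel_history (illustrative only;
# the transport and request id depend on your client).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tunnel_history",
        "arguments": {"domain": "ai-dev"},
    },
}
print(json.dumps(request, indent=2))
```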

Implementation Reference

  • Implementation of the 'tunnel_history' tool, which provides a meta-view of engagement with a domain over time using either summaries or raw conversation data.
    def tunnel_history(domain: str) -> str:
        """
        Meta-view of your engagement with a domain over time.
        Shows total conversations, thinking stage distribution, importance peaks,
        and cognitive patterns.
        """
        db = get_summaries_db()
        if db:
            rows = db.execute("""
                SELECT thinking_stage, importance, emotional_tone,
                       cognitive_pattern, problem_solving_approach, concepts, source
                FROM summaries WHERE domain_primary = ?
            """, [domain]).fetchall()
    
            if not rows:
                return f"No conversations found for domain: {domain}"
    
            cols = [
                "thinking_stage", "importance", "emotional_tone",
                "cognitive_pattern", "problem_solving_approach", "concepts", "source",
            ]
    
            stage_counts, imp_counts, tone_counts = {}, {}, {}
            pattern_counts, approach_counts, source_counts = {}, {}, {}
            all_concepts = {}
    
            for row in rows:
                r = dict(zip(cols, row))
                s = r["thinking_stage"] or "unknown"
                stage_counts[s] = stage_counts.get(s, 0) + 1
                i = r["importance"] or "routine"
                imp_counts[i] = imp_counts.get(i, 0) + 1
                t = r["emotional_tone"] or ""
                if t:
                    tone_counts[t] = tone_counts.get(t, 0) + 1
                p = r["cognitive_pattern"] or ""
                if p:
                    pattern_counts[p] = pattern_counts.get(p, 0) + 1
                a = r["problem_solving_approach"] or ""
                if a:
                    approach_counts[a] = approach_counts.get(a, 0) + 1
                src = r["source"] or ""
                if src:
                    source_counts[src] = source_counts.get(src, 0) + 1
                for c in parse_json_field(r["concepts"]):
                    all_concepts[c] = all_concepts.get(c, 0) + 1
    
            output = [f"## 📊 Tunnel History: {domain}\n"]
            output.append(f"**Total conversations:** {len(rows)}")
            bt = imp_counts.get("breakthrough", 0)
            sig = imp_counts.get("significant", 0)
            output.append(f"**Importance:** {bt} breakthrough, {sig} significant, {imp_counts.get('routine', 0)} routine")
    
            output.append(f"\n### Thinking Stages")
            for s, c in sorted(stage_counts.items(), key=lambda x: -x[1]):
                pct = c / len(rows) * 100
                bar = "█" * int(pct / 5)
                output.append(f"  {s}: {c} ({pct:.0f}%) {bar}")
    
            if source_counts:
                output.append(f"\n### Sources")
                for s, c in sorted(source_counts.items(), key=lambda x: -x[1]):
                    output.append(f"  {s}: {c}")
    
            if pattern_counts:
                output.append(f"\n### Cognitive Patterns")
                for p, c in sorted(pattern_counts.items(), key=lambda x: -x[1])[:7]:
                    output.append(f"  {p}: {c}")
    
            if approach_counts:
                output.append(f"\n### Problem Solving Approaches")
                for a, c in sorted(approach_counts.items(), key=lambda x: -x[1])[:7]:
                    output.append(f"  {a}: {c}")
    
            if tone_counts:
                output.append(f"\n### Emotional Tones")
                for t, c in sorted(tone_counts.items(), key=lambda x: -x[1])[:5]:
                    output.append(f"  {t}: {c}")
    
            if all_concepts:
                output.append(f"\n### Top Concepts ({len(all_concepts)} total)")
                for c, n in sorted(all_concepts.items(), key=lambda x: -x[1])[:10]:
                    output.append(f"  {c}: {n}")
    
            return "\n".join(output)

        # Fallback when no summaries database is available; the raw
        # conversation-data path mentioned above is not shown in this excerpt.
        return f"No summaries database available for domain: {domain}"
  • Registration of the 'tunnel_history' tool in the dashboard registry.
    {
        "name": "tunnel_history",
        "description": "Engagement meta-view over time for a domain",
        "category": "Cognitive Prosthetic",
        "requires": ["summaries"],
        "params": {"domain": "str"},
        "probe": {"domain": "ai-dev"},
    },
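The implementation relies on two helpers not shown in this excerpt, `get_summaries_db` and `parse_json_field`. A minimal sketch of both, assuming summaries live in a local SQLite file and that `concepts` is stored as a JSON-encoded array (the real implementations and database path may differ):

```python
import json
import sqlite3
from pathlib import Path

SUMMARIES_DB_PATH = Path("summaries.db")  # assumed location

def get_summaries_db():
    """Return a SQLite connection to the summaries store, or None if absent."""
    if not SUMMARIES_DB_PATH.exists():
        return None
    return sqlite3.connect(SUMMARIES_DB_PATH)

def parse_json_field(value):
    """Decode a JSON-array column (e.g. concepts) into a list, tolerating
    NULLs and malformed values."""
    if not value:
        return []
    try:
        parsed = json.loads(value)
    except (TypeError, ValueError):
        return []
    return parsed if isinstance(parsed, list) else []
```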

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mordechaipotash/brain-mcp'
