get_correction_history

Retrieve recent voice transcription corrections to analyze systematic Whisper errors, debug mis-transcriptions, or train vocabulary. Returns original and corrected text, confidence delta, timestamp, and correction source.

Instructions

Return recent voice transcription corrections detected by the user or auto-detected.

Returns correction events: original transcript, corrected text, confidence delta, timestamp, and whether the correction was manual or auto-suggested.

USE WHEN: training the vocabulary, analyzing systematic Whisper errors, or debugging why a specific term keeps mis-transcribing. NOT FOR: vocabulary management — use add_to_vocabulary / remove_from_vocabulary.

BEHAVIOR: pure read. No side effects.

PARAMETERS: limit: max results, ordered newest-first. Range 1-100. Default 20.

Input Schema

Name  | Required | Description                                          | Default
limit | No       | Maximum corrections to return (1-100, newest-first)  | 20
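The documented range suggests out-of-range limit values are clamped to the nearest bound rather than rejected. A minimal sketch of that saturating behavior (the helper name is hypothetical, not part of the tool's API):

```python
def clamp_limit(limit: int, lo: int = 1, hi: int = 100) -> int:
    """Saturate limit into the documented 1-100 range."""
    return max(lo, min(limit, hi))

# Values below 1 come back as 1; values above 100 come back as 100.
print(clamp_limit(0), clamp_limit(20), clamp_limit(500))
```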

Output Schema

Name   | Required | Description | Default
result | Yes      |             |

Implementation Reference

  • Actual implementation of get_correction_history tool. Queries the activity database for 'correction_detected' events from the 'keys' modality, formats them into a human-readable string with timestamp, original text, corrected text, confidence, correction type, and seconds after paste.
    import json
    import time

    @mcp_app.tool()
    def get_correction_history(limit: int = 20) -> str:
        """Get recent Voice correction detections.
    
        Shows what words/phrases the user corrected after voice dictation,
        along with confidence scores and timestamps. These corrections are
        automatically fed back to Voice's vocabulary for self-improving dictation.
    
        Args:
            limit: Maximum number of corrections to return (default 20).
        """
        limit = max(1, min(limit, 100))  # clamp into the documented 1-100 range
    
        conn = _get_db()
        if not conn:
            return "No activity database found."
    
        try:
            rows = conn.execute(
                "SELECT timestamp, payload FROM events "
                "WHERE modality = 'keys' AND event_type = 'correction_detected' "
                "ORDER BY timestamp DESC LIMIT ?",
                (limit,),
            ).fetchall()
            conn.close()
    
            if not rows:
                return "No corrections detected yet. Corrections are captured when you edit Voice-dictated text."
    
            lines = [f"=== Correction History (last {len(rows)}) ===\n"]
            for row in rows:
                p = json.loads(row["payload"])
                ts = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(row["timestamp"]))
                orig = p.get("original_text", "?")
                corr = p.get("corrected_text", "?")
                conf = p.get("confidence", 0)
                ctype = p.get("correction_type", "?")
                secs = p.get("seconds_after_paste", 0)
    
                lines.append(f"[{ts}] {orig!r} -> {corr!r}")
                lines.append(f"  Type: {ctype}, Confidence: {conf:.0%}, {secs:.1f}s after paste")
                lines.append("")
    
            return "\n".join(lines)
        except Exception as e:
            return f"Error reading corrections: {e}"
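The formatting loop above can be exercised on a synthetic correction event. The payload values here are illustrative; only the key names mirror what the implementation reads:

```python
import json
import time

# A fabricated 'correction_detected' payload with the fields the loop expects.
payload = json.dumps({
    "original_text": "clawed",
    "corrected_text": "Claude",
    "confidence": 0.92,
    "correction_type": "manual",
    "seconds_after_paste": 3.4,
})

p = json.loads(payload)
ts = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(1700000000))
line1 = f"[{ts}] {p['original_text']!r} -> {p['corrected_text']!r}"
line2 = (
    f"  Type: {p['correction_type']}, Confidence: {p['confidence']:.0%}, "
    f"{p['seconds_after_paste']:.1f}s after paste"
)
print(line1)
print(line2)  # "  Type: manual, Confidence: 92%, 3.4s after paste"
```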
  • Tool registered via @mcp_app.tool() decorator on the FastMCP server ('ContextPulse Touch') in the touch package.
    @mcp_app.tool()
    def get_correction_history(limit: int = 20) -> str:
  • Stub registration in the Glama.ai registry. Returns a message telling users to install ContextPulse locally; this is a discovery-only stub, not the real implementation.
    @mcp_app.tool()
    def get_correction_history(limit: int = 20) -> str:
        """Return recent voice transcription corrections detected by the user or auto-detected.
    
        Returns correction events: original transcript, corrected text, confidence
        delta, timestamp, and whether the correction was manual or auto-suggested.
    
        USE WHEN: training the vocabulary, analyzing systematic Whisper errors, or
        debugging why a specific term keeps mis-transcribing.
        NOT FOR: vocabulary management — use add_to_vocabulary / remove_from_vocabulary.
    
        BEHAVIOR: pure read. No side effects.
    
        PARAMETERS:
          limit: max results, ordered newest-first. Range 1-100. Default 20.
        """
        return _LOCAL_ONLY_MSG
  • Helper function _get_db() that opens a sqlite3 connection to the activity database (ACTIVITY_DB_PATH), used by get_correction_history to fetch correction events.
    import sqlite3

    # _DB_PATH is the module-level Path to the activity database (not shown in this excerpt)
    def _get_db() -> sqlite3.Connection | None:
        if not _DB_PATH.exists():
            return None
        conn = sqlite3.connect(str(_DB_PATH), timeout=5)
        conn.row_factory = sqlite3.Row
        return conn
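The query in get_correction_history implies a minimal shape for the events table. The schema below is an assumption reconstructed from the columns and filters the query uses (timestamp, payload, modality, event_type), not the daemon's actual DDL:

```python
import json
import sqlite3

# In-memory stand-in for the activity database.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row

# Assumed schema: just the columns the correction-history query touches.
conn.execute(
    "CREATE TABLE events ("
    "  timestamp REAL,"
    "  modality TEXT,"
    "  event_type TEXT,"
    "  payload TEXT"
    ")"
)
conn.execute(
    "INSERT INTO events VALUES (?, 'keys', 'correction_detected', ?)",
    (1700000000.0, json.dumps({"original_text": "clawed", "corrected_text": "Claude"})),
)

# The same query the tool runs.
rows = conn.execute(
    "SELECT timestamp, payload FROM events "
    "WHERE modality = 'keys' AND event_type = 'correction_detected' "
    "ORDER BY timestamp DESC LIMIT ?",
    (20,),
).fetchall()
print(len(rows), json.loads(rows[0]["payload"])["corrected_text"])
```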
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

States 'BEHAVIOR: pure read. No side effects.' and lists return fields (original transcript, corrected text, confidence delta, timestamp, manual/auto). No annotations provided, so description fully covers behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, well-structured with clear sections: purpose, return fields, usage guidelines, behavior, parameter details. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a one-parameter tool with an output schema. Covers purpose, usage, behavior, parameter, and return data. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0% coverage but description adds 'limit: max results, ordered newest-first. Range 1-100. Default 20.' Provides range, ordering, and default value beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Return recent voice transcription corrections' with specific verb and resource. Distinguishes from siblings by specifying use cases and not for vocabulary management.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides 'USE WHEN' (training vocabulary, analyzing errors, debugging) and 'NOT FOR' with alternatives (add_to_vocabulary / remove_from_vocabulary).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ContextPulse/contextpulse'

If you have feedback or need assistance with the MCP directory API, please join our Discord server