Glama

get_match_telemetry

Retrieve match telemetry for both teams: total tokens consumed, per-turn thinking time, number of tool calls, and turn count. Use for post-game analysis or mid-game cost monitoring.

Instructions

Read-only. Return server-tracked match statistics for both teams: total tokens consumed, per-turn thinking time, number of tool calls, and turn count. Available during and after a match. Use this for post-game analysis or mid-game cost monitoring. For game-state history (what moves were made) use get_history instead.

Input Schema

Name            Required   Description   Default
connection_id   Yes        (none)        (none)

Implementation Reference

  • Core implementation: returns aggregated telemetry (turn times, tokens, tool calls, errors) for both teams.
    def get_match_telemetry(session: Session, viewer: Team) -> dict:
        """Return server-side telemetry for both teams."""
        result: dict = {}
        for team in (Team.BLUE, Team.RED):
            times = session.turn_times_by_team[team]
            avg = sum(times) / len(times) if times else 0.0
            result[team.value] = {
                "turns_played": len(times),
                "total_thinking_time_s": round(sum(times), 1),
                "avg_thinking_time_s": round(avg, 1),
                "total_tokens": session.tokens_by_team[team],
                "total_tool_calls": session.tool_calls_by_team[team],
                "total_errors": session.tool_errors_by_team[team],
            }
        return result
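The aggregation above can be exercised in isolation. The Team and Session classes below are hypothetical stand-ins for the real silicon_pantheon types, assuming Team is an enum whose .value is the lowercase team name and Session exposes the four per-team dicts the handler reads; the handler body itself is copied from the snippet above.

```python
from enum import Enum

class Team(Enum):
    # Hypothetical stand-in: assumes the real enum's .value is the lowercase name.
    BLUE = "blue"
    RED = "red"

class Session:
    # Hypothetical stand-in exposing only the per-team dicts the handler reads.
    def __init__(self):
        self.turn_times_by_team = {Team.BLUE: [6.2, 8.1, 5.7], Team.RED: [7.4, 9.9]}
        self.tokens_by_team = {Team.BLUE: 15230, Team.RED: 17802}
        self.tool_calls_by_team = {Team.BLUE: 31, Team.RED: 28}
        self.tool_errors_by_team = {Team.BLUE: 1, Team.RED: 0}

def get_match_telemetry(session: Session, viewer: Team) -> dict:
    """Return server-side telemetry for both teams."""
    result: dict = {}
    for team in (Team.BLUE, Team.RED):
        times = session.turn_times_by_team[team]
        avg = sum(times) / len(times) if times else 0.0
        result[team.value] = {
            "turns_played": len(times),
            "total_thinking_time_s": round(sum(times), 1),
            "avg_thinking_time_s": round(avg, 1),
            "total_tokens": session.tokens_by_team[team],
            "total_tool_calls": session.tool_calls_by_team[team],
            "total_errors": session.tool_errors_by_team[team],
        }
    return result

telemetry = get_match_telemetry(Session(), Team.BLUE)
```

Note that the viewer argument does not filter the result: both teams' telemetry is always returned, which matches the tool's stated purpose.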
  • MCP registration of get_match_telemetry as a FastMCP tool: it resolves the session and viewer with _viewer_for_any_state, then calls the core handler under the session lock.
    @mcp.tool()
    def get_match_telemetry(connection_id: str) -> dict:
        """Read-only. Return server-tracked match statistics for both teams: total tokens consumed, per-turn thinking time, number of tool calls, and turn count. Available during and after a match. Use this for post-game analysis or mid-game cost monitoring. For game-state history (what moves were made) use get_history instead."""
        resolved = _viewer_for_any_state(app, connection_id)
        if resolved is None:
            return _error(ErrorCode.GAME_NOT_STARTED, "no game session")
        session, _viewer = resolved
        from silicon_pantheon.server.tools import get_match_telemetry as _get_telemetry
        with session.lock:
            result = _get_telemetry(session, _viewer)
        return _ok({"result": result})
  • Schema/type definition: takes only connection_id, returns a dict with telemetry. Documented as read-only, available during and after match.
    @mcp.tool()
    def get_match_telemetry(connection_id: str) -> dict:
        ...
  • Note explaining get_match_telemetry is deliberately excluded from TOOL_REGISTRY (agent tool list) but is registered as an MCP tool in game_tools.py.
    # NOTE: report_tokens and get_match_telemetry are registered as MCP
    # tools on the server (game_tools.py) but are NOT in TOOL_REGISTRY.
    # They're infrastructure tools called by the client software directly,
    # not by the LLM agent. Keeping them out of the registry means they
    # don't appear in the agent's tool list.
  • Client-side caller: fetches telemetry on the post-match screen to display turn times, tokens, tool calls, and errors for both teams.
    try:
        r = await app.client.call("get_match_telemetry")
        if r.get("ok"):
            self._agent_stats = (r.get("result") or {})
    except Exception:
        pass

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Although no annotations exist, the description declares 'Read-only' and notes availability constraints ('during and after a match'), providing key behavioral context. It lacks detail on error handling or authorization, but for a simple stats tool this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences: first states core function, second adds usage context, third differentiates from a sibling. It is front-loaded and every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description covers purpose, usage, and availability. It lists the returned statistics, providing enough detail for the agent to understand the tool's output, though it omits potential error responses.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter 'connection_id' with 0% description coverage. The description does not explain what 'connection_id' refers to or its format, leaving the agent to infer its meaning from context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
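
One way to close this gap would be to attach a description to connection_id in the tool's input schema. The fragment below is a hypothetical illustration, not the server's actual schema; the wording of the description is an assumption based on how the tool is used on this page.

```json
{
  "type": "object",
  "properties": {
    "connection_id": {
      "type": "string",
      "description": "Opaque session identifier returned when the client connects; selects which match's telemetry to return."
    }
  },
  "required": ["connection_id"]
}
```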

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Return server-tracked match statistics for both teams' and lists specific fields (total tokens, thinking time, etc.), clearly defining the tool's purpose and distinguishing it from the sibling tool 'get_history'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies when to use ('Available during and after a match', 'post-game analysis or mid-game cost monitoring') and explicitly states the alternative: 'For game-state history use get_history instead.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/haoyifan/Silicon-Pantheon'
