
ck_outcome_tracker

Record session outcomes and retrieve agent performance leaderboards to close the reinforcement-learning feedback loop.

Instructions

Record session outcomes or retrieve agent performance leaderboards to close the reinforcement-learning feedback loop. Three modes: record persists a session outcome (write operation); get_session reads a specific outcome by session_id (read-only); get_leaderboard returns ranked agent performance (read-only). For record mode: pass session_id, outcome (success/partial/failure), agent_id, and task_type. For get_leaderboard: pass workspace_id and optional window (days) and limit. Call after task completion before ending the session so ck_route and ck_cost_optimizer have fresh performance data for future routing decisions.
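
For illustration, the per-mode parameter lists above translate into argument payloads roughly like the following; the field values and the call mechanism are assumptions, only the parameter names and the outcome values come from the description.

# Hypothetical argument payloads for each mode, assembled from the
# parameters named in the description; all values are illustrative.
record_args = {
    "mode": "record",
    "session_id": "sess-001",     # example identifier
    "outcome": "success",         # one of success / partial / failure
    "agent_id": "agent-claude",   # example agent identifier
    "task_type": "code-review",   # example task classification
}

get_session_args = {
    "mode": "get_session",
    "session_id": "sess-001",
}

get_leaderboard_args = {
    "mode": "get_leaderboard",
    "workspace_id": "ws-main",    # example workspace identifier
    "window": 30,                 # optional: look-back window in days
    "limit": 10,                  # optional: maximum number of results
}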

Input Schema

Name | Required | Description
agent_id | No | (no description)
limit | No | Maximum number of results to return.
mode | Yes | Operation mode that determines the tool behavior and return shape.
outcome | No | Result classification of the operation.
session_id | No | Unique session identifier for correlating findings, proofs, budget, and audit trail.
task_type | No | (no description)
window | No | (no description)
workspace_id | No | Workspace identifier for cross-session scope.
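
Expressed as a Python type sketch (unofficial; the Literal values come from the description and everything else mirrors the table), the input looks roughly like this:

from typing import Literal, TypedDict

# Unofficial sketch of the input shape. Per the schema, only mode is
# required; the remaining fields are optional and mode-dependent.
class OutcomeTrackerInput(TypedDict, total=False):
    mode: Literal["record", "get_session", "get_leaderboard"]
    session_id: str    # correlates findings, proofs, budget, and audit trail
    outcome: Literal["success", "partial", "failure"]
    agent_id: str
    task_type: str
    workspace_id: str  # cross-session scope
    window: int        # look-back window in days, per the description
    limit: int         # maximum number of results
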
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It correctly identifies record as a write operation and the other two as read-only. However, it does not clarify whether record overwrites existing entries or handles duplicates, nor does it mention error responses or side effects beyond persistence.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
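
For example, because overwrite-versus-duplicate behavior is left open, a cautious agent might read the session back before writing. This is a hypothetical defensive pattern, not documented behavior, and call_tool stands in for whatever client call the agent actually uses.

# Hypothetical guard against unspecified duplicate handling in record mode.
# call_tool is a placeholder for the agent's real MCP client call.
def record_once(call_tool, session_id, outcome, agent_id, task_type):
    existing = call_tool("ck_outcome_tracker", {
        "mode": "get_session",
        "session_id": session_id,
    })
    if existing:
        # Response shape is undocumented, so treat any truthy value
        # as "already recorded" and skip the write.
        return existing
    return call_tool("ck_outcome_tracker", {
        "mode": "record",
        "session_id": session_id,
        "outcome": outcome,
        "agent_id": agent_id,
        "task_type": task_type,
    })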

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single well-organized paragraph: first sentence gives overarching purpose, then enumerates modes with their specific parameters, ending with usage timing. No unnecessary words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema to fall back on, the description also gives no indication of return values for any mode. The agent does not know what to expect back from record, get_session, or get_leaderboard, yet that knowledge is essential for correct invocation and result processing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
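
As a consequence, result handling has to be defensive. Below is a sketch of what that might look like for get_leaderboard; every wrapper key probed here is a guess, since no response shape is documented.

# Conservative handling of an undocumented return shape: accept either a
# list or a dict wrapper, and only read keys that are actually present.
def extract_leaderboard_rows(result):
    if isinstance(result, dict):
        # "entries", "leaderboard", "results" are guessed wrapper keys,
        # not documented ones.
        for key in ("entries", "leaderboard", "results"):
            if isinstance(result.get(key), list):
                return result[key]
        return []
    if isinstance(result, list):
        return result
    return []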

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 63% (moderate), but the description adds significant value by grouping parameters by mode and specifying valid values for outcome (success/partial/failure). This clarifies parameter relationships beyond the schema's individual descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
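
That per-mode grouping can be captured as a small pre-flight check; the required sets below are lifted from the description's wording and are not an official contract.

# Pre-flight check derived from the description's per-mode parameter lists.
REQUIRED_BY_MODE = {
    "record": {"session_id", "outcome", "agent_id", "task_type"},
    "get_session": {"session_id"},
    "get_leaderboard": {"workspace_id"},
}
VALID_OUTCOMES = {"success", "partial", "failure"}

def check_args(args):
    mode = args.get("mode")
    if mode not in REQUIRED_BY_MODE:
        raise ValueError(f"unknown mode: {mode!r}")
    missing = REQUIRED_BY_MODE[mode] - args.keys()
    if missing:
        raise ValueError(f"missing parameters for {mode}: {sorted(missing)}")
    if mode == "record" and args.get("outcome") not in VALID_OUTCOMES:
        raise ValueError("outcome must be success, partial, or failure")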

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's three modes (record, get_session, get_leaderboard) and their individual purposes. It differentiates the tool from siblings by explaining how its outputs feed into ck_route and ck_cost_optimizer, making its role in the system clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly specifies when to use the tool: "Call after task completion before ending the session so ck_route and ck_cost_optimizer have fresh performance data." It details required parameters per mode and mentions sibling tools by name, providing clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
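
In workflow terms, that guidance amounts to a short end-of-task sequence. A sketch, assuming the agent classifies its own result and call_tool wraps the real client call:

# Hypothetical end-of-session sequence following the description's guidance:
# record the outcome before ending the session so ck_route and
# ck_cost_optimizer see fresh performance data on their next run.
def finish_task(call_tool, session_id, agent_id, task_type, ok, partial=False):
    outcome = "success" if ok else ("partial" if partial else "failure")
    call_tool("ck_outcome_tracker", {"mode": "record",
                                     "session_id": session_id,
                                     "outcome": outcome,
                                     "agent_id": agent_id,
                                     "task_type": task_type})
    # The session can now be ended; routing tools pick up the new outcome later.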


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/aryaminus/controlkeel'
