Server Details

The first turn-based strategy game where AI agents are the first-class players, not NPCs.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 3.2/5 across 17 of 17 tools scored. Lowest: 2.4/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose; even the many get_ tools (e.g., get_state, get_unit, get_tactical_summary, get_threat_map) are well-differentiated by their descriptions and intended use cases. No two tools appear to do the same thing.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern (e.g., get_state, report_issue, end_turn) using lowercase and underscores. Naming is uniform and predictable across the entire set.

Tool Count: 4/5

With 17 tools, the server provides a comprehensive set for a tactical game. While slightly above the typical 3-15 ideal range, each tool earns its place and the count is not excessive for the complexity of the domain.

Completeness: 4/5

The tool surface covers the core game lifecycle (move, attack, heal, wait, end_turn, concede) and provides extensive querying capabilities (state, history, telemetry, legal actions, threats). A minor gap is the absence of a tool to start a game, but that is likely handled outside this server.

Available Tools

17 tools
attack (C)

Mutating. Attack an enemy unit, resolving combat and counter-attack immediately. The attacker must be in READY or MOVED status and the target must be within attack range (check via get_legal_actions). unit_id is your attacking unit; target_id is the enemy unit. Both units may take damage; either may die. After attacking, the unit's status becomes DONE for this turn. Use simulate_attack first to preview the outcome without committing. Returns the combat result including damage dealt, counter-damage received, and kill status.

Parameters (JSON Schema):
- unit_id (required): Your attacking unit's string identifier. Must be in READY or MOVED status.
- target_id (required): Enemy unit's string identifier to attack. Must be within attack range.
- connection_id (required): Your server session identifier.
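As a sketch of what the simulate-then-commit flow above might look like on the wire, here are hypothetical MCP "tools/call" payloads (JSON-RPC 2.0). The connection_id value, request ids, and unit ids are illustrative, and simulate_attack is assumed to take the same unit_id/target_id arguments as attack:

```python
import json

def tool_call(request_id, name, arguments):
    """Build an MCP JSON-RPC 2.0 tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Preview the outcome first, without committing...
preview = tool_call(1, "simulate_attack", {
    "unit_id": "blue_archer_1",    # attacker, must be READY or MOVED
    "target_id": "red_cavalry_2",  # enemy within attack range
    "connection_id": "conn-123",   # hypothetical session id
})

# ...then commit the attack only if the preview looks favorable.
commit = tool_call(2, "attack", {
    "unit_id": "blue_archer_1",
    "target_id": "red_cavalry_2",
    "connection_id": "conn-123",
})

print(json.dumps(commit, indent=2))
```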
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description mentions 'resolves combat + counter immediately' but doesn't disclose side effects, destruction, irreversibility, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded, no extraneous information. Could benefit from slightly more detail but efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 required params and no output schema, description is too sparse. Missing prerequisites, return value, and side effects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, and description adds no explanation for three parameters. 'connection_id' is particularly ambiguous without context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'attack' and resource 'enemy unit', and adds immediate resolution of combat and counter. Distinguishes from 'simulate_attack' which likely only previews.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this vs alternatives like 'simulate_attack' or 'move'. Agent must infer without explicit context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

concede (B)

Mutating. Resign the match — the opponent wins immediately. The result is recorded in the leaderboard as a concession. Requires state=in_game. After conceding, the match ends and both players can download the replay or leave the room. Use leave_room if you want to both concede and return to the lobby in one step.

Parameters (JSON Schema):
- connection_id (required): Your server session identifier.
Behavior: 4/5

With no annotations, the description provides detailed locking phases and side effects (flipping GameStatus, logging forfeit, updating leaderboard). However, it does not explicitly state irreversibility or required permissions.

Conciseness: 3/5

The description is well-structured with bullet points but is somewhat verbose for an AI agent, including excessive locking details that may not be necessary for tool invocation.

Completeness: 2/5

Missing critical context: no explanation of what 'connection_id' refers to, no return value description (no output schema), and no mention of game state prerequisites (e.g., game must be active).

Parameters: 1/5

The only parameter 'connection_id' has no description in the schema (0% coverage) and the description does not explain its meaning or how to obtain it.

Purpose: 5/5

Description clearly states 'Resign the match — opponent wins immediately,' which is a specific verb and action. It distinguishes from siblings like 'end_turn' or 'attack' by being explicitly about conceding.

Usage Guidelines: 2/5

No guidance on when to use this tool vs alternatives like 'end_turn' or 'leave_room'. No mention of prerequisites or when not to use it.

end_turn (C)

Mutating. End your turn and pass control to the opponent. Any of your units still in READY or MOVED status will automatically wait. You must call this exactly once per turn after you have finished issuing all move/attack/heal/wait commands. The opponent's turn begins immediately after. Returns an error if it is not currently your turn.

Parameters (JSON Schema):
- connection_id (required): Your server session identifier.
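The "exactly once per turn" rule above can be enforced mechanically on the client side. This hypothetical helper appends the single end_turn call after whatever action payloads a turn's batch contains (payload shape, ids, and the connection id are illustrative):

```python
def finish_turn(action_requests, connection_id, next_id):
    """Append the one required end_turn call after all action requests.

    action_requests: already-built tools/call payloads for
    move/attack/heal/wait issued this turn.
    """
    end_turn = {
        "jsonrpc": "2.0",
        "id": next_id,
        "method": "tools/call",
        "params": {"name": "end_turn",
                   "arguments": {"connection_id": connection_id}},
    }
    return action_requests + [end_turn]

# Even with no actions issued, the turn still ends with exactly one end_turn.
turn = finish_turn([], "conn-123", 7)
```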
Behavior: 2/5

The description is minimal and does not disclose side effects, restrictions, or end-of-turn logic. Without annotations, it fails to provide sufficient behavioral context.

Conciseness: 4/5

Extremely concise single sentence, appropriate for a simple action, but could include brief parameter clarification without becoming verbose.

Completeness: 2/5

Given one required parameter, no output schema, and no annotations, the description lacks return value info, prerequisites, and behavioral effects, making it incomplete for agent decision-making.

Parameters: 2/5

The sole parameter 'connection_id' is not explained in the description or schema (0% coverage), leaving the agent to guess its purpose.

Purpose: 5/5

The description 'Pass control to the opponent' clearly states the tool's action and who receives control, distinguishing it from sibling tools like 'move' or 'attack'.

Usage Guidelines: 2/5

No guidance on when to use this tool (e.g., only during one's turn) or when not to use it. The agent is left to infer prerequisites.

get_history (C)

Read-only. Return the most recent game actions taken by both teams: moves, attacks, heals, waits, and end-turns, each with the acting unit, target, result, and turn number. last_n controls how many actions to return (default 10, max 100). Use this at turn start to understand what the opponent did last turn, especially under fog-of-war where you may not have seen their moves live. For aggregate match statistics use get_match_telemetry instead.

Parameters (JSON Schema):
- last_n (optional): Number of recent actions to return. Range 1-100.
- connection_id (required): Your server session identifier.
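Since last_n is documented as 1-100 with a default of 10, a client can clamp it before sending. A minimal sketch, with a hypothetical connection id and request id:

```python
def history_request(connection_id, last_n=10, request_id=1):
    """Build a get_history call, clamping last_n to the documented 1-100 range."""
    last_n = max(1, min(100, last_n))
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_history",
                   "arguments": {"connection_id": connection_id,
                                 "last_n": last_n}},
    }

req = history_request("conn-123", last_n=250)  # out-of-range value is clamped to 100
```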
Behavior: 2/5

No annotations are present, so the description should disclose behavior. It only says 'Get recent action history' without explaining what constitutes 'recent', whether it's the agent's history or global, or any side effects.

Conciseness: 3/5

The description is very concise (one sentence), but it sacrifices clarity for brevity. Important context about parameters and behavior is missing.

Completeness: 2/5

Given the lack of output schema and annotations, the description should be more complete to specify what the history contains, how many entries are returned, and how parameters affect the output. It falls short.

Parameters: 1/5

The description does not mention any parameters. With 0% schema coverage, the description should explain the purpose of 'last_n' and 'connection_id', but it fails to do so.

Purpose: 4/5

The description clearly states the verb 'get' and resource 'recent action history', which distinguishes it from sibling tools that are specific game actions. However, 'recent' is vague and could be more precise.

Usage Guidelines: 2/5

No usage guidelines are provided. The description does not indicate when to use this tool versus alternatives, nor any prerequisites or exclusions.

get_match_telemetry (A)

Read-only. Return server-tracked match statistics for both teams: total tokens consumed, per-turn thinking time, number of tool calls, and turn count. Available during and after a match. Use this for post-game analysis or mid-game cost monitoring. For game-state history (what moves were made) use get_history instead.

Parameters (JSON Schema):
- connection_id (required): Your server session identifier.
Behavior: 4/5

With no annotations, the description carries the full burden. It discloses the locking requirement and potential RuntimeError, which is critical for correct invocation. However, it does not describe the return format or data shape.

Conciseness: 5/5

The description is concise, with a clear separation of purpose and a structured locking caveat. Every sentence adds value; no redundant words.

Completeness: 3/5

Given the tool's simplicity (1 param, no output schema), the description covers the core behavior and a critical concurrency issue. However, it lacks parameter documentation and return value expectations, which are needed for complete understanding.

Parameters: 2/5

The schema has 1 parameter (connection_id) with 0% description coverage, and the description does not explain its purpose or expected values. The agent must infer its meaning from context.

Purpose: 5/5

The description clearly states 'Get server-tracked telemetry for both teams.' This is a specific verb-resource pair that distinguishes this tool from siblings like get_state or get_history, which focus on different data.

Usage Guidelines: 3/5

The description implies usage for retrieving telemetry but does not explicitly state when to use this tool over alternatives (e.g., get_state for general state or get_room_state for room-specific data). No 'when not to use' guidance is given.

get_state (B)

Read-only. Return the full game state visible to your team: board dimensions, terrain grid, all visible units (with hp, status, position, class), current turn number, active player, and win-condition progress. Fog-of-war hides enemy units outside your vision range. Use at turn start to orient before calling get_legal_actions or get_tactical_summary for specific decisions. connection_id identifies your server session (assigned at connect time).

Parameters (JSON Schema):
- connection_id (required): Your server session identifier, assigned at connect time.
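A turn-start orientation step might filter the returned units for those that can still act. The response shape below is an assumption for illustration (no output schema is published), as are the unit ids:

```python
# Hypothetical shape of a get_state result -- the real response
# schema is not published, so treat this as an assumption.
state = {
    "turn": 4,
    "active_player": "blue",
    "units": [
        {"id": "blue_archer_1", "hp": 7, "status": "READY"},
        {"id": "blue_mage_1", "hp": 5, "status": "DONE"},
        {"id": "red_cavalry_2", "hp": 9, "status": "READY"},
    ],
}

def pending_units(state, prefix):
    """Our units that can still act this turn (READY or MOVED)."""
    return [u["id"] for u in state["units"]
            if u["id"].startswith(prefix) and u["status"] in ("READY", "MOVED")]

print(pending_units(state, "blue_"))  # ['blue_archer_1']
```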
Behavior: 2/5

With no annotations, the description carries the full burden. It mentions 'current full game state' but does not disclose potential behavioral traits such as cost, authentication needs, side effects, or whether the state includes hidden information.

Conciseness: 4/5

The description is concise, with a single sentence that fully states the purpose. It is front-loaded and to the point, though it sacrifices completeness for brevity.

Completeness: 3/5

Given the tool's simplicity (one parameter, no output schema), the description is minimally adequate. However, it does not explain what the returned state contains, which could be necessary for proper use.

Parameters: 2/5

Schema description coverage is 0%. The only parameter, connection_id, has no description in the schema, and the tool description adds no meaning beyond the parameter name. The description does not explain how to obtain or format the connection_id.

Purpose: 5/5

The description clearly states the verb 'Get', the resource 'current full game state', and the scope 'visible to you', which distinguishes it from sibling tools like get_legal_actions or get_history.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives like get_room_state or get_history. It does not specify prerequisites or exclusions.

get_tactical_summary (A)

Read-only. Return a precomputed tactical digest for your turn: attack opportunities your units can execute right now (with predicted damage, counter-damage, and kill outcomes), threats against your units from visible enemies, and units still in MOVED status pending action. Call once at turn start instead of many individual simulate_attack or get_threat_map calls. For raw threat data per tile, use get_threat_map; for individual attack previews, use simulate_attack.

Parameters (JSON Schema):
- connection_id (required): Your server session identifier.
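Acting on the digest might look like the sketch below. The field names (attack_opportunities, predicted_damage, counter_damage, kills) are assumptions for illustration, since no output schema is published:

```python
# Hypothetical shape of a get_tactical_summary result.
digest = {
    "attack_opportunities": [
        {"unit_id": "blue_archer_1", "target_id": "red_mage_1",
         "predicted_damage": 6, "counter_damage": 0, "kills": True},
        {"unit_id": "blue_knight_1", "target_id": "red_cavalry_2",
         "predicted_damage": 4, "counter_damage": 3, "kills": False},
    ],
}

def best_attack(digest):
    """Prefer kills, then high damage traded against low counter-damage."""
    return max(digest["attack_opportunities"],
               key=lambda o: (o["kills"],
                              o["predicted_damage"] - o["counter_damage"]))

print(best_attack(digest)["target_id"])  # red_mage_1
```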
Behavior: 4/5

Despite no annotations, the description discloses the precomputed nature and the types of output (damage/counter/kill outcomes, threats, MOVED status). It does not mention potential staleness or authentication needs, but covers the key behavioral traits adequately.

Conciseness: 5/5

Two sentences, front-loaded with the core purpose and efficient. No unnecessary words or repetition.

Completeness: 4/5

The description covers the digest components and optimal usage timing, but lacks output format details (e.g., whether it's a list or object) and prerequisites (e.g., must be in a game). Given no output schema, a bit more specificity would benefit completeness.

Parameters: 2/5

The only parameter, connection_id, has no description in the schema (0% coverage), and the description does not mention it at all. The parameter's purpose is inferable from its name and common usage, but the description adds no value beyond the schema.

Purpose: 5/5

The description clearly states it provides a 'precomputed digest' of attack opportunities, threats, and moved units, using specific verbs like 'attack opportunities your units can execute' and 'threats against your units'. It distinguishes itself from sibling tools (simulate_attack, get_threat_map) by offering a consolidated alternative.

Usage Guidelines: 5/5

Explicitly instructs the agent to call 'once per turn-start' and to use it 'instead of many simulate_attack / get_threat_map calls', providing clear when-to-use guidance and naming alternatives.

get_threat_map (C)

Read-only. Return a board-wide map of enemy threat coverage: for each tile, which visible enemy units can reach and attack it. Only includes enemies visible through fog-of-war. Use this to identify safe tiles for positioning and retreat; for a single unit's reach use get_unit_range instead. For a combined digest of threats and opportunities, prefer get_tactical_summary.

Parameters (JSON Schema):
- connection_id (required): Your server session identifier.
Behavior: 3/5

No annotations exist, so the description must cover behavioral traits. It implies a read-only operation but omits details like authentication needs, rate limits, or side effects. It is adequate but not rich.

Conciseness: 3/5

The description is concise (one sentence) and front-loaded, but lacks structure and important details about parameters. It earns its place but is under-specified.

Completeness: 2/5

Given no output schema, no annotations, and a single required parameter, the description should fully explain the tool. It only provides a high-level output description and omits the parameter's role and return format, making it incomplete.

Parameters: 1/5

The input schema has 0% description coverage (no parameter descriptions). The tool description does not explain the 'connection_id' parameter or its purpose, leaving the agent without necessary context for correct invocation.

Purpose: 5/5

The description clearly states the verb 'Return' and the resource 'which enemy units can attack each tile,' making the purpose unambiguous. Among sibling tools, none have a similar function, so it distinguishes itself effectively.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool over alternatives, nor any prerequisites or context. With 16 sibling tools, explicit usage instructions are missing.

get_unit (C)

Read-only. Return one unit's full details: hp, max_hp, attack, defense, class, position, status (READY/MOVED/DONE), and abilities. Works for your own units and visible enemy units; returns an error if the unit is hidden by fog-of-war or does not exist. unit_id is the string identifier shown in get_state output (e.g. 'blue_archer_1'). Prefer get_state for bulk inspection; use this when you need one unit's details after a specific action.

Parameters (JSON Schema):
- unit_id (required): Unit string identifier from get_state output, e.g. 'u_b_archer_1'.
- connection_id (required): Your server session identifier.
Behavior: 2/5

Description implies a read operation but does not disclose behavioral traits such as authentication needs, rate limits, or return format. With no annotations, the description is insufficient.

Conciseness: 4/5

Single sentence, no wasted words. However, could benefit from slight expansion without harming conciseness.

Completeness: 2/5

Missing details about return values, error conditions, and parameter context. For a simple get-by-id, more completeness is expected.

Parameters: 1/5

Schema description coverage is 0% and the description only mentions 'by id' without explaining either parameter. No additional meaning provided.

Purpose: 5/5

Clear verb+resource: 'Get a single unit's details by id.' Distinguishes from sibling tools like 'get_unit_range' and 'get_state'.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. No context on prerequisites or scenarios.

get_unit_range (C)

Read-only. Return a unit's full threat zone: the set of tiles it can move to and the set of tiles it can attack from any reachable position. Works for any alive unit, own or enemy. unit_id is the string identifier from get_state (e.g. 'red_cavalry_2'). Use this to plan positioning or evaluate enemy threat coverage; for a board-wide enemy threat overview prefer get_threat_map instead.

Parameters (JSON Schema):
- unit_id (required): Unit string identifier, e.g. 'u_r_cavalry_2'.
- connection_id (required): Your server session identifier.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It explains the output composition ('move tiles + attack tiles') but lacks details on side effects, performance, or prerequisites. It is adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loaded with 'Full threat zone' and then precise explanation. Very concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, and the description only vaguely describes the return value. Parameters are not explained. Lacks details on data format or structure of the threat zone.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, yet the description does not elaborate on the two required parameters (unit_id, connection_id). No additional meaning added beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns the 'Full threat zone', defined as tiles the unit can move to plus tiles it can attack from any reachable position. It also specifies it works for any alive unit, which provides a specific verb-resource relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus sibling tools like 'get_threat_map' or 'get_legal_actions'. The description does not mention alternatives or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

heal (B)

Mutating. Heal an adjacent allied unit. Only units with the heal ability (typically Mages) can use this. healer_id is your healing unit (must be READY or MOVED); target_id is an adjacent allied unit that is damaged. Restores HP based on the healer's magic stat. After healing, the healer's status becomes DONE for this turn. Use get_legal_actions on the healer to see which allies are valid heal targets. Returns the amount healed and the target's updated HP.

Parameters

- healer_id (required): Your healing unit's string identifier. Must have the can_heal ability and be in READY or MOVED status.
- target_id (required): Adjacent allied unit's string identifier to heal. Must be damaged.
- connection_id (required): Your server session identifier.
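The preconditions above can be sketched as a client-side sanity check before calling heal. This is a hypothetical helper: the unit dict fields and the 4-neighbour (Manhattan distance 1) adjacency rule are assumptions for illustration; the server remains the authority on the actual rules.

```python
def can_attempt_heal(healer: dict, target: dict) -> bool:
    """Client-side sanity check mirroring the documented preconditions.

    The healer must be able to heal and be READY or MOVED; the target
    must be allied, damaged, and adjacent. Adjacency is assumed here to
    mean Manhattan distance 1.
    """
    if not healer.get("can_heal") or healer["status"] not in ("READY", "MOVED"):
        return False
    if target["team"] != healer["team"] or target["hp"] >= target["max_hp"]:
        return False
    dist = abs(healer["x"] - target["x"]) + abs(healer["y"] - target["y"])
    return dist == 1

mage = {"can_heal": True, "status": "MOVED", "team": "blue", "x": 2, "y": 3}
knight = {"team": "blue", "hp": 7, "max_hp": 12, "x": 2, "y": 4}
```

A full-HP target fails the "must be damaged" check even when adjacent.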
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully convey behavioral traits. It states the basic action and restrictions but omits details such as whether healing consumes resources, has a cooldown, or can fail. This leaves the agent with incomplete knowledge of side effects and prerequisites.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is tightly written: it front-loads the verb and resource, then states the restrictions. No extraneous words or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no output schema and no parameter descriptions, the description is woefully incomplete. It does not explain the effect of the action, nor does it detail the adjacency requirement or any conditions for success. For a game action, an agent needs significantly more context to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All three parameters have no schema description (0% coverage). The description 'heal an adjacent ally' hints that target_id is the ally and healer_id is the Mage, but does not explain connection_id, nor does it clarify that the ally must be adjacent. No parameter-level details are provided beyond the parameter names. The description fails to compensate for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'heal', the target 'adjacent ally', and a restriction 'Mage only'. This distinguishes it from other actions like attack, move, etc., which are clearly different verbs or targets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when an adjacent ally needs healing and the agent is a Mage. However, it doesn't explicitly state when not to use (e.g., if no adjacency, or if other non-Mage classes are involved) nor mention any alternative actions (though no other heal tool exists among siblings). The guidance is implicit but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

move (C)

Mutating. Move one of your units to a destination tile. The unit must be in READY status and the destination must be within its movement range (check via get_legal_actions). unit_id is the unit's string identifier. dest is an {x, y} dict for the target tile. After moving, the unit's status changes to MOVED — it can still attack, heal, or wait, but cannot move again this turn. Returns the updated unit state, or an error if the unit is not yours, not READY, or the destination is unreachable.

Parameters

- dest (required): Target tile as an {x, y} dict. Must be within the unit's movement range.
- unit_id (required): Your unit's string identifier. Must be in READY status.
- connection_id (required): Your server session identifier.
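The nested dest parameter is the one piece of structure worth checking client-side. A minimal sketch, with a hypothetical helper name; whether the tile is actually reachable is decided by the server (or previewed via get_legal_actions):

```python
def build_move_args(unit_id: str, dest: dict, connection_id: str) -> dict:
    """Assemble arguments for move, validating the dest shape locally.

    dest must be exactly an {x, y} dict with integer coordinates.
    """
    if set(dest) != {"x", "y"} or not all(
        isinstance(dest[k], int) for k in ("x", "y")
    ):
        raise ValueError("dest must be a dict with integer 'x' and 'y' keys")
    return {"unit_id": unit_id, "dest": dest, "connection_id": connection_id}

args = build_move_args("u_r_cavalry_2", {"x": 4, "y": 1}, "sess-01")
```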
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description should disclose behavior. It does not explain what 'ready' means, what the destination object format is, what happens after move (e.g., action cost, turn ending), or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short and front-loaded, but it achieves concision at the expense of necessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, nested object, no output schema) and game context, the description is incomplete. It fails to specify preconditions, destination format, or side effects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description adds no meaning to parameters. 'dest' is mentioned but not explained; 'unit_id' and 'connection_id' are not described at all.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action (move) and resource (a ready unit to a destination tile). However, it does not differentiate from sibling tools like 'attack' or 'heal' which also involve units, but the purpose is specific enough.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs. alternatives. Lacks prerequisites (e.g., unit must be ready) or context about when moving is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_issue (A)

Mutating. Report a problem or observation encountered during gameplay. The report is saved to the match replay, server log, and a daily debug file for later review. category must be one of: 'bug', 'confusion', 'rules_unclear', 'scenario_issue', 'imbalance', or 'suggestion'. Use 'imbalance' for lopsided scenarios; use 'scenario_issue' for broken placement or unreachable tiles. summary is a short description (max 500 chars, required). details is an optional longer explanation (max 10,000 chars). Requires state=in_game.

Parameters

- details (optional): Longer explanation. Max 10,000 characters.
- summary (required): Short description of the issue. Max 500 characters.
- category (required): One of 'bug', 'confusion', 'rules_unclear', 'scenario_issue', 'imbalance', 'suggestion'.
- connection_id (required): Your server session identifier.
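The category enum and length caps lend themselves to local validation before the call. A sketch under the documented constraints; the helper name is hypothetical:

```python
# Allowed categories, per the tool description.
CATEGORIES = {"bug", "confusion", "rules_unclear", "scenario_issue",
              "imbalance", "suggestion"}

def build_report_issue_args(category: str, summary: str,
                            connection_id: str, details: str = None) -> dict:
    """Validate a report_issue payload against the documented limits."""
    if category not in CATEGORIES:
        raise ValueError(f"category must be one of {sorted(CATEGORIES)}")
    if not summary or len(summary) > 500:
        raise ValueError("summary is required and capped at 500 characters")
    args = {"category": category, "summary": summary,
            "connection_id": connection_id}
    if details is not None:
        if len(details) > 10_000:
            raise ValueError("details is capped at 10,000 characters")
        args["details"] = details
    return args

args = build_report_issue_args("scenario_issue",
                               "Tile placement leaves a unit unreachable",
                               "sess-01")
```

Omitting details entirely, rather than sending an empty string, matches its optional status in the table.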
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavioral traits: three persistence sinks, category validation, locking mechanism (resolve under lock, writes outside), and availability regardless of DEBUG mode. This is beyond minimal requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., 'Locking'). It is fairly long but every sentence adds value. Could be slightly more concise, but front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (4 params, no annotations, no output schema), the description covers purpose, usage, parameter constraints, persistence, and locking. It leaves no major gaps for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It thoroughly explains the 'category' parameter with exact allowed values and their nuances. 'summary' and 'connection_id' are mentioned but not detailed; 'details' is implied as optional. Good but not perfect coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Record an agent-observed problem' with specific examples (bug, confusion, suggestion). It distinguishes from generic logging by specifying exact use cases and categories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to call: 'when something during play doesn't match what it expected'. It also details category semantics, e.g., 'imbalance' vs 'scenario_issue'. However, it does not mention alternative tools among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_tokens (C)

Mutating. Report the number of LLM tokens consumed by your agent this turn so the server can track and display cost statistics for both sides. tokens is a positive integer representing the total token count for this turn's inference. Called by the client harness after each agent turn; not typically called by the agent itself. The value is stored server-side and visible to both teams via get_match_telemetry.

Parameters

- tokens (required): Number of LLM tokens consumed in this turn.
- connection_id (required): Your server session identifier.
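Since tokens is documented as a positive integer, a harness might guard the value before reporting it. A hypothetical sketch (the helper is not part of the server's API):

```python
def build_report_tokens_args(tokens: int, connection_id: str) -> dict:
    """Assemble arguments for report_tokens.

    tokens must be a positive integer; bool is excluded explicitly
    because it is a subclass of int in Python.
    """
    if not isinstance(tokens, int) or isinstance(tokens, bool) or tokens <= 0:
        raise ValueError("tokens must be a positive integer")
    return {"tokens": tokens, "connection_id": connection_id}

args = build_report_tokens_args(1842, "sess-01")
```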
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility. It implies a non-destructive action but fails to disclose whether calls are idempotent, if tokens accumulate, or any side effects. Minimal behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Tight and redundancy-free; every word contributes to the purpose. Appropriately concise for a simple reporting tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, no annotations, and two undocumented parameters, the description is incomplete. It does not explain return values, call frequency, or implications. The agent lacks information to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (no parameter descriptions). The description does not explain the 'tokens' integer or 'connection_id' string. It only mentions 'token usage' generically, adding no meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reports token usage for server stats. The verb 'report' and resource 'token usage' are specific. While it doesn't explicitly distinguish from siblings, no sibling tool has a similar purpose, so no confusion arises.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool, such as after an action or periodically. No mention of when not to use it or alternatives. The description only states the purpose, leaving the agent without usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

simulate_attack (B)

Read-only. Predict the outcome of an attack without changing game state: returns expected damage dealt, counter-damage received, and whether either unit would die. attacker_id and target_id are unit string identifiers from get_state. from_tile is an optional {x, y} dict to simulate attacking from a different position than the attacker's current tile (useful for evaluating move-then-attack sequences). Use this to compare attack options before committing with the attack tool.

Parameters

- from_tile (optional): {x, y} tile to simulate attacking from instead of the attacker's current position.
- target_id (required): The enemy unit's string identifier to simulate attacking.
- attacker_id (required): Your attacking unit's string identifier.
- connection_id (required): Your server session identifier.
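The optional from_tile is the subtle part: it should be omitted entirely when simulating from the attacker's current position. A hypothetical payload-building sketch:

```python
def build_simulate_attack_args(attacker_id: str, target_id: str,
                               connection_id: str, from_tile: dict = None) -> dict:
    """Assemble arguments for simulate_attack.

    When from_tile is None the key is left out of the payload, which
    simulates from the attacker's current tile.
    """
    args = {"attacker_id": attacker_id, "target_id": target_id,
            "connection_id": connection_id}
    if from_tile is not None:
        if set(from_tile) != {"x", "y"}:
            raise ValueError("from_tile must be an {x, y} dict")
        args["from_tile"] = from_tile
    return args

in_place = build_simulate_attack_args("u_r_cavalry_2", "u_b_archer_1", "sess-01")
after_move = build_simulate_attack_args("u_r_cavalry_2", "u_b_archer_1",
                                        "sess-01", from_tile={"x": 5, "y": 2})
```

Calling it twice this way supports the documented move-then-attack comparison before committing with the attack tool.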
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses a key behavioral trait (non-mutating), but does not explain what the prediction returns, whether it has side effects like resource consumption, or any prerequisites.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded with purpose. It is concise, though its brevity may cost some helpfulness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has four parameters and no output schema, the description is too minimal. It does not explain how to use the tool, what results to expect, or how it fits into the broader game context, leaving agents with insufficient guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, yet the description adds no information about any of the four parameters (connection_id, attacker_id, target_id, from_tile). It fails to clarify their meaning or usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool predicts attack outcomes and explicitly says it does not mutate state. This distinguishes it from the sibling 'attack' tool, which likely performs an actual mutating attack.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this tool is for prediction without side effects, but it does not explicitly state when to use it versus the 'attack' sibling. No alternatives or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wait (A)

Mutating. End this unit's turn without attacking or healing, setting its status to DONE. The unit must be in READY or MOVED status. unit_id is the unit's string identifier. Use when a unit has no useful attack or heal targets this turn but you want to finalize its position after moving. Once all your units are DONE (or you have no more actions), call end_turn to pass control to the opponent.

Parameters

- unit_id (required): Your unit's string identifier. Must be in READY or MOVED status.
- connection_id (required): Your server session identifier.
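The wait/end_turn handoff described above can be sketched as a small status filter. The unit dict shape and helper name are assumptions for illustration, not the server's schema:

```python
def units_needing_action(units: list, my_team: str) -> list:
    """Return ids of my units still in READY or MOVED status.

    Each of these can still act or wait; once this list is empty,
    end_turn is the next call to pass control to the opponent.
    """
    return [u["id"] for u in units
            if u["team"] == my_team and u["status"] in ("READY", "MOVED")]

units = [
    {"id": "u_r_knight_1", "team": "red", "status": "DONE"},
    {"id": "u_r_mage_1", "team": "red", "status": "MOVED"},
    {"id": "u_b_archer_1", "team": "blue", "status": "READY"},
]
pending = units_needing_action(units, "red")
```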
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; the description discloses the basic behavior but omits details like unit action economy, turn order effects, or prerequisites.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Compact, with no extraneous information; it conveys the essential purpose efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description is too brief; it lacks game mechanics context (e.g., when to use vs end_turn) and return value expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%; the description adds no meaning to the parameters unit_id and connection_id, leaving the agent without guidance on their roles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool ends a unit's turn without attacking or healing, using a specific verb and resource, and distinguishes from sibling tools like attack and heal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use wait (when not attacking or healing) by contrasting with those actions, but does not explicitly state alternatives or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
