cueapi_report_outcome

Report execution outcomes with immutable evidence like external IDs and result URLs to prove work completion and maintain accountability.

Instructions

Report the outcome of an execution. CueAPI's core accountability primitive: attach evidence (external_id, result_url, summary) that proves the work actually happened. Write-once — the outcome record is immutable.

Input Schema

| Name         | Required | Description                                            | Default |
| ------------ | -------- | ------------------------------------------------------ | ------- |
| execution_id | Yes      |                                                        |         |
| success      | Yes      |                                                        |         |
| external_id  | No       | ID from the downstream system                          |         |
| result_url   | No       | Public URL proving the work happened (tweet, PR, etc.) |         |
| summary      | No       | Short human summary of what the agent did              |         |
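
A minimal sketch of what a call's arguments might look like, built from the schema above. The validation helper, the client-side check, and all IDs and URLs shown are illustrative assumptions, not part of CueAPI's actual SDK:

```python
# Hypothetical payload builder for cueapi_report_outcome.
# Field names come from the input schema above; everything else
# (helper name, example values) is illustrative.
REQUIRED = {"execution_id", "success"}
OPTIONAL = {"external_id", "result_url", "summary"}

def build_outcome_payload(**fields):
    """Check fields against the schema before sending the tool call."""
    missing = REQUIRED - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    unknown = fields.keys() - REQUIRED - OPTIONAL
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return fields

payload = build_outcome_payload(
    execution_id="exec_123",                  # hypothetical execution ID
    success=True,
    external_id="tweet_456",                  # ID from the downstream system
    result_url="https://example.com/proof",   # public URL proving the work
    summary="Posted the announcement tweet.",
)
```

Because the outcome record is write-once, validating the payload client-side before calling the tool is worthwhile: a bad report cannot be amended after the fact.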
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full documentation burden, and it adds valuable behavioral context: it describes the tool as a 'core accountability primitive,' specifies immutability ('Write-once — the outcome record is immutable'), and implies it is for finalizing executions. It doesn't cover permissions, rate limits, or error handling, but it provides key operational traits beyond basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by key behavioral details in a second sentence. Every sentence earns its place by adding value (accountability primitive, evidence attachment, immutability), with no wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides good context for a mutation tool: it explains the purpose, key behavioral trait (immutability), and parameter semantics. It could improve by mentioning response format or error cases, but it's largely complete for guiding usage in this accountability context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 60% (3 of 5 parameters have descriptions), and the description adds meaning by explaining the purpose of parameters: 'attach evidence (external_id, result_url, summary) that proves the work actually happened.' This clarifies the role of these evidence fields beyond schema descriptions, compensating for the 40% coverage gap (execution_id and success lack schema descriptions).
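
The coverage figure cited above is straightforward arithmetic over the schema: three of the five parameters carry descriptions.

```python
# Schema description coverage for cueapi_report_outcome.
described = 3  # external_id, result_url, summary
total = 5      # plus execution_id and success, which lack descriptions
coverage = described / total
print(f"{coverage:.0%}")  # 60%
```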

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Report') and resource ('outcome of an execution'), specifying it's CueAPI's 'core accountability primitive' for attaching evidence to prove work happened. It distinguishes from siblings like cueapi_create_cue or cueapi_list_executions by focusing on outcome reporting rather than cue management or listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('attach evidence that proves the work actually happened') and mentions 'Write-once — the outcome record is immutable,' suggesting when to use it for final reporting. However, it lacks explicit guidance on when not to use it or alternatives among siblings (e.g., vs cueapi_list_executions for checking status).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
