
report_outcome

Report API or MCP server call outcomes to contribute to community reliability ratings. Submit success, error, timeout, or unexpected results with latency and error details.

Instructions

Leave a review — report the outcome of an API or MCP server call you just made. Like writing a TrustPilot review, your report contributes to the community reliability rating for that service.

The more agents that report, the more accurate the ratings become for everyone.

Args:
    service: Service slug that was called
    outcome: Result — 'success', 'error', 'timeout', or 'unexpected'
    latency_ms: Response time in milliseconds (if available)
    error_type: If outcome was 'error', the category: 'auth', 'rate_limit', 'server', 'network', 'parse', 'other'

Input Schema

Name          Required   Description   Default
service       Yes        -             -
outcome       Yes        -             -
latency_ms    No         -             null
error_type    No         -             null
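
For illustration, a set of arguments that satisfies this schema; the service slug here is hypothetical:

    # Hypothetical example arguments for a report_outcome call.
    example_args = {
        "service": "example-weather-api",  # made-up slug; real slugs come from the directory
        "outcome": "error",                # one of: 'success', 'error', 'timeout', 'unexpected'
        "latency_ms": 1450,                # optional; omit if unknown
        "error_type": "rate_limit",        # optional; only meaningful when outcome is 'error'
    }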

Output Schema

No named fields are defined in the output schema.
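
Although the schema declares no fields, the implementation below shows the actual return shapes: on success, the Preflight API's JSON response is passed through unchanged; on HTTP or connection failure, a structured error dict. A sketch of the failure shape, with hypothetical values:

    # Failure shape returned by the tool (values here are hypothetical):
    failure_example = {
        "error": True,
        "status_code": 429,        # None when the connection itself failed
        "detail": "rate limited",  # response body or connection error message
    }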

Implementation Reference

  • The 'report_outcome' tool is defined as an asynchronous function decorated with @mcp.tool. It accepts the service slug, the outcome, and optional latency and error details, then sends a POST request to the Preflight API's /v1/report endpoint to record the report.
    import httpx

    # Assumed module-level definitions elsewhere in the server:
    #   mcp           - the FastMCP instance providing the @mcp.tool decorator
    #   PREFLIGHT_API - base URL of the Preflight API
    #   _headers()    - helper that builds the request headers
    @mcp.tool
    async def report_outcome(
        service: str,
        outcome: str,
        latency_ms: int | None = None,
        error_type: str | None = None,
    ) -> dict:
        """Leave a review — report the outcome of an API or MCP server call you
        just made. Like writing a TrustPilot review, your report contributes to
        the community reliability rating for that service.

        The more agents that report, the more accurate the ratings become for everyone.

        Args:
            service: Service slug that was called
            outcome: Result — 'success', 'error', 'timeout', or 'unexpected'
            latency_ms: Response time in milliseconds (if available)
            error_type: If outcome was 'error', the category:
                        'auth', 'rate_limit', 'server', 'network', 'parse', 'other'
        """
        # Required fields are always sent; optional fields only when provided.
        payload = {"service": service, "outcome": outcome}

        if latency_ms is not None:
            payload["latency_ms"] = latency_ms

        if error_type is not None:
            payload["error_type"] = error_type

        try:
            async with httpx.AsyncClient(timeout=10) as client:
                resp = await client.post(
                    f"{PREFLIGHT_API}/v1/report",
                    json=payload,
                    headers=_headers(),
                )
                resp.raise_for_status()
                return resp.json()
        except httpx.HTTPStatusError as exc:
            # Non-2xx response: surface the status and body instead of raising.
            return {
                "error": True,
                "status_code": exc.response.status_code,
                "detail": exc.response.text,
            }
        except (httpx.ConnectError, httpx.TimeoutException) as exc:
            # Network-level failure: no status code is available.
            return {
                "error": True,
                "status_code": None,
                "detail": f"Connection failed: {exc}",
            }
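
For reference, a minimal self-contained sketch of the HTTP request this tool issues; the base URL below is a placeholder, and the real call also sends whatever headers _headers() supplies:

    import httpx

    PREFLIGHT_API = "https://preflight.example.com"  # placeholder; the real value is configured in the server

    payload = {"service": "example-service", "outcome": "success", "latency_ms": 320}
    resp = httpx.post(f"{PREFLIGHT_API}/v1/report", json=payload, timeout=10)
    resp.raise_for_status()
    print(resp.json())
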
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It explains the tool's purpose (reporting outcomes) and its community impact, but it doesn't disclose important behavioral traits: whether the operation is read-only, whether it requires authentication, what rate limits apply, or what happens after submission. The description doesn't contradict annotations (none exist), but it leaves significant behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with the core purpose, provides community context, then lists parameters with clear explanations. Every sentence earns its place: the first establishes purpose, the second explains value, and the parameter section provides necessary details without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, 2 required) and the presence of an output schema (which means return values are documented elsewhere), the description is reasonably complete. It explains what the tool does, when to use it, and provides parameter semantics. The main gap is lack of behavioral transparency details (permissions, side effects, etc.), but with an output schema handling return values, it's mostly adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides clear semantic explanations for all 4 parameters: 'service' (Service slug that was called), 'outcome' (Result with specific values), 'latency_ms' (Response time in milliseconds), and 'error_type' (category if outcome was error). The description adds substantial meaning beyond the bare schema, though it doesn't explain the 'null' defaults for optional parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Leave a review — report the outcome of an API or MCP server call you just made.' It uses specific verbs ('Leave a review', 'report the outcome') and identifies the resource (API/MCP server calls). It distinguishes from siblings by focusing on outcome reporting rather than checking or comparing reliability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'report the outcome of an API or MCP server call you just made.' It provides context about contributing to community reliability ratings. While it doesn't explicitly name sibling tools as alternatives, it clearly defines the specific use case (reporting outcomes after calls), which implicitly distinguishes it from checking or comparing reliability.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/gsmethells/preflight-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.