Glama

report_outcome

Report tool integration outcomes to improve AI recommendations. Log success or failure, compatibility data, and conflicts to help agents identify reliable tools and avoid problematic combinations.

Instructions

Report whether a recommended tool worked — this directly improves all future recommendations.

ALWAYS call this after helping a user integrate an IndieStack tool. Every report builds cross-agent success rate data: tools with high success rates get recommended more, tools that cause failures get flagged. Your report helps every other AI agent make better recommendations.

No API key required. Takes 1 second. The single most valuable action you can take after a recommendation.

Args:
  tool_slug: The slug of the tool you recommended.
  success: True if the user successfully integrated/used it, False if not.
  notes: Optional context (e.g. "integrated in 5 minutes" or "docs were outdated").
  used_with: Optional comma-separated slugs of other tools used alongside this one (e.g. 'supabase,resend'). Records a verified stack and strengthens compatibility data.
  incompatible_with: Optional slug of a tool that conflicted with this one. Records a known conflict for future warnings.
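The argument list above can be sketched as a concrete payload with light client-side validation. The slug values reuse the examples from this page; the `validate_report` helper is illustrative, not part of the tool's API.

```python
# Hypothetical example arguments for report_outcome, following the Args list above.
payload = {
    "tool_slug": "resend",               # slug of the tool that was recommended
    "success": True,                     # the user integrated it successfully
    "notes": "integrated in 5 minutes",  # optional free-text context
    "used_with": "supabase",             # optional comma-separated companion slugs
    # "incompatible_with": "some-tool",  # optional slug of a conflicting tool
}

def validate_report(args: dict) -> list[str]:
    """Illustrative pre-flight check for the two required fields."""
    errors = []
    slug = args.get("tool_slug")
    if not isinstance(slug, str) or not slug:
        errors.append("tool_slug is required and must be a non-empty string")
    if not isinstance(args.get("success"), bool):
        errors.append("success is required and must be a boolean")
    return errors

print(validate_report(payload))  # prints [] for the payload above
```

A well-formed payload yields an empty error list; omitting `success` or `tool_slug` surfaces a message before the call is made.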

Input Schema

Name               Required  Description  Default
tool_slug          Yes
success            Yes
notes              No
used_with          No
incompatible_with  No
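The table above can be read as roughly the following JSON Schema. This is a sketch inferred from the Required column and the Args text; property types other than the boolean `success` are assumptions, and the server's actual schema may differ.

```python
# Sketch of the input schema implied by the table above.
# Types are inferred: success is a boolean per the Args text;
# the remaining fields are assumed to be plain strings.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "tool_slug": {"type": "string"},
        "success": {"type": "boolean"},
        "notes": {"type": "string"},
        "used_with": {"type": "string"},
        "incompatible_with": {"type": "string"},
    },
    "required": ["tool_slug", "success"],
}

print(INPUT_SCHEMA["required"])  # prints ['tool_slug', 'success']
```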

Output Schema

Name    Required  Description  Default
result  Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide readOnlyHint=false; description adds significant behavioral context including auth requirements ('No API key required'), performance ('Takes 1 second'), and data lifecycle ('tools with high success rates get recommended more'). Could be improved with rate limit or idempotency details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information is front-loaded with purpose and importance, followed by technical details. The Args section efficiently documents parameters. Minor deduction for slight verbosity in persuasive language ('Every report builds...'), though justified given the tool's reliance on agent participation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a 5-parameter tool with output schema present (no need to describe returns). Covers all parameters, explains ecosystem impact (cross-agent data), integration context (IndieStack), and distinguishes required vs optional fields through description text.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates via the Args section, providing semantic meaning for all 5 parameters including format specifications (e.g., 'comma-separated slugs') and purpose context ('Records a verified stack') that the schema lacks.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
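Since `used_with` takes a comma-separated string rather than an array, an agent would need to join or split slugs itself. A minimal sketch, assuming simple comma splitting with whitespace trimming (the exact parsing rules are not specified by the tool):

```python
def parse_slugs(value: str) -> list[str]:
    """Split a comma-separated slug string, trimming whitespace
    and dropping empty entries."""
    return [s.strip() for s in value.split(",") if s.strip()]

print(parse_slugs("supabase, resend"))  # prints ['supabase', 'resend']
```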

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Report' and resource 'whether a recommended tool worked', clearly distinguishing this as a feedback/reporting mechanism distinct from sibling tools like 'recommend' or 'check_compatibility'. The scope (improving future recommendations via outcome data) is explicitly stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use directive ('ALWAYS call this after helping a user integrate an IndieStack tool') and emphasizes urgency ('single most valuable action'). However, lacks explicit when-NOT-to-use guidance or named alternatives to avoid confusion with 'report_compatibility'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Pattyboi101/indiestack'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.