
loaditout-mcp-server

Official

report_skill_usage

Report skill outcomes after execution to generate verifiable proofs and build trust scores. Submit success, error, or partial status with optional error details to improve community quality ratings and create verification records.

Instructions

Report the outcome of using a skill, generating a verifiable execution proof. Returns a JSON object with proof_id, verify_url, and shareable_text. The proof is permanently recorded and contributes to the skill's quality score. Use this after every skill invocation to build your agent's trust score and help the community identify reliable tools. Do not call this before actually using the skill. Requires the skill slug and a status indicating the outcome.
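As a hedged sketch of the shapes involved: only the field names (slug, status, proof_id, verify_url, shareable_text) come from the description and schema on this page; all values below are illustrative placeholders, not real identifiers.

```python
# Hedged sketch: the exact wire format isn't documented here; only the field
# names come from the tool description and input schema. Values are invented.
report = {
    "slug": "supabase/mcp",   # owner/repo format, per the schema
    "status": "success",      # one of: success, error, partial
}

# Shape of the JSON object the tool is described as returning.
proof = {
    "proof_id": "<opaque-id>",                                    # hypothetical
    "verify_url": "https://example.invalid/verify/<opaque-id>",   # hypothetical
    "shareable_text": "Verified: supabase/mcp ran successfully",  # hypothetical
}

print(sorted(proof))
```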

Input Schema

slug (required): The slug of the skill you used, in owner/repo format. Examples: 'supabase/mcp', 'microsoft/playwright-mcp'. Must match a skill that exists in the Loaditout registry.

status (required): The outcome of using the skill: 'success' if it worked as expected, 'error' if it failed completely, 'partial' if it partially worked with issues. Be honest, as this affects community quality scores.

error_message (optional): Details about what went wrong. Required when status is 'error' or 'partial'. Examples: 'Connection timeout after 30 seconds', 'Missing required env var SUPABASE_URL'. Omit when status is 'success'.
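The conditional rule on error_message is easy to get wrong, so here is a minimal validation sketch of the schema rules above. The function name and error strings are my own; only the constraints (owner/repo slug, the three status values, and when error_message is required or must be omitted) come from the schema.

```python
def validate_report(report: dict) -> list[str]:
    """Check a report_skill_usage payload against the schema rules above."""
    errors = []

    # slug must be present and in owner/repo form.
    slug = report.get("slug", "")
    if slug.count("/") != 1 or not all(slug.split("/")):
        errors.append("slug must be in owner/repo format")

    # status is a three-value enum.
    status = report.get("status")
    if status not in ("success", "error", "partial"):
        errors.append("status must be success, error, or partial")

    # error_message is required for error/partial, omitted for success.
    if status in ("error", "partial") and not report.get("error_message"):
        errors.append("error_message required when status is error or partial")
    if status == "success" and "error_message" in report:
        errors.append("omit error_message when status is success")

    return errors
```

For example, `validate_report({"slug": "supabase/mcp", "status": "success"})` returns an empty list, while a status of 'error' with no error_message produces a violation.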
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden and succeeds well: it discloses persistence ('permanently recorded'), side effects ('contributes to the skill's quality score', 'build your agent's trust score'), and the return value structure ('JSON object with proof_id...'). Rate limits and auth requirements, however, go unmentioned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six sentences with zero waste: front-loaded with core action, followed by return values, side effects, positive usage guideline, negative usage guideline, and parameter requirements. Every sentence earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description compensates well for the missing output schema by naming the return fields (proof_id, verify_url, shareable_text), and for the missing annotations by describing behavioral impacts. It could improve by clarifying its relationship to sibling tools such as verify_proof and list_my_proofs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'Requires the skill slug and a status' but adds no semantic meaning beyond the schema's detailed descriptions and examples for the slug, status enum values, and error_message conditional requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Report the outcome of using a skill, generating a verifiable execution proof,' providing a specific verb (report), resource (skill outcome), and distinguishing it from siblings like verify_proof (which verifies existing proofs) or flag_skill (which likely flags policy issues).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit temporal constraints: 'Use this after every skill invocation' and the negative constraint 'Do not call this before actually using the skill.' However, it does not distinguish itself from the sibling flag_skill, leaving unclear when to report an outcome versus flag a policy violation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/loaditoutadmin/loaditout-mcp-server'
```
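The same lookup can be sketched with Python's standard library. The endpoint is taken verbatim from the curl command above; the network call itself is left commented out so the sketch has no side effects, and the response structure is not assumed.

```python
import json
from urllib.request import Request, urlopen

# Endpoint copied verbatim from the curl example above.
url = "https://glama.ai/api/mcp/v1/servers/loaditoutadmin/loaditout-mcp-server"
req = Request(url, method="GET")

# Uncomment to actually fetch the server record (requires network access):
# with urlopen(req) as resp:
#     server_info = json.load(resp)

print(req.get_method(), req.full_url)
```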
