Server Details

Real engineering memory marketplace for autonomous AI agents. Query structured lessons (problem → root cause → solution + confidence 80-100%) extracted from real GitHub/Slack incidents. Categories: debugging, architecture, performance, security, infrastructure & more.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 4/5

Tools are mostly distinct but there is potential confusion between query_lessons and query_lessons_full (summaries vs. full details). The three 'record_*' tools are also similar but target different aspects of a task run. Overall, an agent can distinguish with careful reading.

Naming Consistency: 4/5

All tools follow a verb_noun pattern (e.g., start_task_run, record_reflection). The pair query_lessons and query_lessons_full introduces a slight inconsistency where the suffix '_full' is used instead of a more descriptive verb. Otherwise consistent.

Tool Count: 4/5

9 tools is a reasonable number covering task run management and lessons. It is not too heavy or too sparse for the domain, though some tools (like the three record_*) could be consolidated.

Completeness: 3/5

The tool surface covers starting and completing tasks, recording metadata, querying lessons, and submitting incidents. However, it lacks tools for updating tasks, retrieving task details, or modifying lesson entries. Additionally, query_lessons_full requires a special key, which may create a dead end for some agents.

Available Tools

9 tools
complete_task_run: C

Complete or update the final status of a Hermes task run with final result, verification evidence, and blockers. Requires x-agent-key and task ownership.

Parameters (JSON Schema)
- status (required)
- blockers (optional)
- task_run_id (required)
- final_result (optional)
- verification (optional)
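
To make the call shape concrete, here is a hypothetical arguments payload for an MCP tools/call request, written as a Python dict. The schema documents none of these fields, so every value and nested shape below is an assumption; the status values 'verifying' and 'in_progress' are taken from the Purpose assessment below.

    # Hypothetical arguments for complete_task_run; the schema publishes no
    # field descriptions, so all shapes and values here are assumptions.
    arguments = {
        "task_run_id": "run-0001",   # placeholder ID, presumably from start_task_run
        "status": "verifying",       # enum; 'verifying'/'in_progress' per the review
        "final_result": "Hotfix deployed; error rate back to baseline.",
        "verification": {            # nested object, internal shape assumed
            "method": "smoke_test",
            "passed": True,
        },
        "blockers": [],              # list shape assumed
    }
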
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It states only a security requirement (agent key, ownership) and omits side effects, idempotency, and what happens if the task is already in a terminal state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no redundancy. However, it could benefit from breaking out requirements or listing key parameters for better scannability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters (including enums and nested objects) and no output schema, the description is too sparse. It fails to explain status options, the structure of verification/blockers, or the expected response, leaving significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It mentions three parameters (final_result, verification, blockers) but does not explain the status enum, required task_run_id, or parameter formats. Partial compensation but insufficient for full clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool completes or updates the final status of a Hermes task run with specific fields (result, verification, blockers). The verb-resource pairing is clear, but the term 'final status' is slightly ambiguous given the enum includes non-terminal states like 'in_progress' and 'verifying', reducing precision.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions a prerequisite (requires x-agent-key and task ownership) but provides no guidance on when to use this tool versus sibling tools like start_task_run, or when not to use it. No explicit context or alternatives are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_lessons: A

Search Younanix engineering lessons by natural language query. Returns problem summaries with category, confidence, and tags. Full lesson details (root cause, lesson learned, evidence) require the paid agent-query endpoint.

Parameters (JSON Schema)
- query (required): Natural language search query
- max_results (optional): Maximum number of results to return (default 10)
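
A minimal client sketch, assuming the official MCP Python SDK (the mcp package) and its streamable HTTP transport; the server URL and the query string are placeholders:

    import asyncio

    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    SERVER_URL = "https://example.com/mcp"  # placeholder; use the URL listed above

    async def main() -> None:
        # The streamable HTTP client yields read/write streams plus a
        # session-id getter, which this sketch ignores.
        async with streamablehttp_client(SERVER_URL) as (read, write, _):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "query_lessons",
                    {"query": "postgres connection pool exhaustion", "max_results": 5},
                )
                # The description promises problem summaries with category,
                # confidence, and tags; the exact payload shape is unpublished.
                for block in result.content:
                    print(block)

    asyncio.run(main())
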
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the burden of disclosure. It adds valuable return-value structure ('problem summaries with category, confidence, and tags') since no output schema exists. However, it omits the explicit safety profile (read-only status), rate limits, and auth requirements that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose declaration, return value specification, and capability limitation. Front-loaded with the core action and appropriately sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 2-parameter tool. Compensates for missing output schema by describing return fields. Mentions domain-specific context ('Younanix engineering'). Minor gap: could note behavior on empty results or pagination beyond the max_results parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline. Description mentions 'natural language query' which aligns with the schema but adds no additional syntax guidance, examples, or constraints beyond what the schema already documents for the two parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Search') and resource ('Younanix engineering lessons') clearly stated. Distinguishes scope from the 'paid agent-query endpoint' mentioned for full details, effectively differentiating this tool's capabilities from its sibling query_lessons_full.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage scope by stating what the tool returns (summaries) versus what requires the paid alternative (full details including root cause). Lacks explicit 'when to use' statement but provides clear boundary guidance through the comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_lessons_full: A

Returns full Younanix Arete V7 lesson envelope (root_cause, lesson_learned, evidence, recency_tier, pattern_strength, etc.). Requires a valid x-agent-key header issued by Younanix admin, otherwise returns an upgrade hint pointing at the x402 paid endpoint.

Parameters (JSON Schema)
- query (required): Natural language search query
- max_results (optional): Maximum number of results (default 10, max 25)
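
Because the x-agent-key travels as an HTTP header rather than a tool argument, it has to be set on the transport. A hedged variation on the query_lessons sketch above (same imports and SERVER_URL); that the streamable HTTP client accepts a headers mapping is an assumption about the SDK, and the key value is a placeholder:

    # Body for main() in the earlier sketch; headers support is assumed.
    headers = {"x-agent-key": "<key-issued-by-Younanix-admin>"}  # placeholder
    async with streamablehttp_client(SERVER_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "query_lessons_full",
                {"query": "redis eviction storm", "max_results": 25},  # 25 is the schema max
            )
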
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses important behavioral traits: authentication via x-agent-key header, and that unauthorized use returns an upgrade hint. It also lists return fields. Since annotations are absent, this description carries the burden well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is fairly concise at two sentences, providing essential information without waste. It could be slightly improved with structural formatting, but it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the medium complexity (no nested objects, no output schema, 2 parameters), the description covers key aspects: what is returned and authentication. However, it does not explain the upgrade hint or explicitly state the tool's mutation/read-only nature, which is a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes both parameters. However, the description adds no extra meaning beyond what the schema provides, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a full Younanix Arete V7 lesson envelope, listing specific fields. It distinguishes itself from siblings by mentioning authentication requirements and a paid endpoint, implying it is a richer version of 'query_lessons'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for querying lessons with authentication, but does not explicitly state when to use this versus 'query_lessons' or the other siblings. It mentions an x-agent-key header requirement, but no guidance on alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

record_reflection: B

Record the structured self-reflection for a Hermes task run and update the parent run reflection summary. Requires x-agent-key and task ownership.

Parameters (JSON Schema)
- outcome (required)
- evidence (optional)
- task_run_id (required)
- lessons_used (optional)
- what_went_well (optional)
- sub_agents_used (optional)
- improvement_next_time (optional)
- what_failed_or_uncertain (optional)
- younanix_feedback_action (optional)
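
Since schema coverage is 0% (see the Parameters assessment below), any example is guesswork; this sketch shows one plausible arguments shape, with every field value and type assumed:

    # Hypothetical arguments for record_reflection; no shape below is documented.
    arguments = {
        "task_run_id": "run-0001",
        "outcome": "success",                    # enum values undocumented
        "what_went_well": "Root cause isolated in one query cycle.",
        "what_failed_or_uncertain": "Staging/production parity unverified.",
        "improvement_next_time": "Add a canary step before full rollout.",
        "lessons_used": ["<lesson-uuid>"],       # list-of-UUIDs shape assumed
        "sub_agents_used": ["log-analyzer"],     # list shape assumed
        "evidence": {"links": ["https://example.com/runs/0001/logs"]},  # shape assumed
    }
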
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It notes that the tool updates the parent run reflection summary (mutation) and requires authentication. However, it omits details on idempotency, error conditions, or side effects beyond the update. Minimal but adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the purpose and then stating the prerequisite. Every word earns its place, with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (9 parameters, nested objects, no output schema, no parameter descriptions), the description is severely incomplete. It does not explain what constitutes a 'structured self-reflection', how parameters map to that, or what the update entails. The agent lacks essential context to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate, but it mentions no parameters. The 9 parameters (including nested objects and enums) are left unexplained. The phrase 'structured self-reflection' does not clarify parameter meanings. Insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (record structured self-reflection) and the resource (Hermes task run, parent run reflection summary). It distinguishes from sibling tools like query_lessons or complete_task_run by specifying a unique purpose related to reflection recording.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions a prerequisite (requires x-agent-key and task ownership) but does not provide guidance on when to use this tool versus alternatives like record_subagent_trace or submit_incident_report. Usage context is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

record_subagent_trace: B

Record a specialist sub-agent brief and output for an existing Hermes task run. Requires x-agent-key and task ownership.

Parameters (JSON Schema)
- brief (required)
- output (required)
- status (optional)
- confidence (optional)
- task_run_id (required)
- subagent_name (required)
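
As with the other record_* tools, the schema documents nothing, so the arguments sketch below is purely illustrative; the review below notes that brief and output are nested objects, but their internal fields here are invented:

    # Hypothetical arguments for record_subagent_trace.
    arguments = {
        "task_run_id": "run-0001",
        "subagent_name": "log-analyzer",
        "brief": {"goal": "Correlate the 5xx spike with recent deploys"},  # shape assumed
        "output": {"summary": "Spike begins 90s after the 14:02 deploy"}, # shape assumed
        "status": "completed",   # optional; value set undocumented
        "confidence": 0.8,       # optional; numeric range assumed
    }
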
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions only the action and the authentication requirement, omitting critical behavioral traits such as idempotency, error states (e.g., a missing task run), side effects, and data overwrite behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loading the purpose and adding a key requirement in a second sentence. It avoids unnecessary words but could be more structured (e.g., listing parameters).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the complexity (6 parameters including nested required objects, no annotations, no output schema), the description provides minimal guidance. It lacks parameter explanations, behavioral details, and error handling information, making it insufficient for an agent to invoke correctly without prior knowledge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning no parameter descriptions in the schema. The description fails to compensate, only implicitly referencing 'brief' and 'output' but not explaining their structure, valid values, or relationships. The two optional parameters (status, confidence) are unmentioned.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (record) and the resource (specialist sub-agent brief and output for existing Hermes task run). It differentiates from sibling record tools like record_reflection by specifying 'sub-agent trace'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when recording a sub-agent trace for a task run, but does not explicitly provide when-to-use or when-not-to-use guidance compared to alternatives like record_reflection or complete_task_run. The prerequisite about x-agent-key and task ownership is noted but not elaborated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

record_younanix_usage: A

Record a Younanix checkpoint/query used during a Hermes task run and update parent run summaries. Requires x-agent-key and task ownership.

Parameters (JSON Schema)
- query (required)
- lesson_ids (optional)
- access_path (required)
- plan_impact (optional)
- task_run_id (required)
- trigger_reason (required)
- insufficient_knowledge (optional)
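
Again with 0% schema coverage, the arguments sketch below is illustrative only; in particular, access_path and trigger_reason are given invented values since their semantics are undocumented:

    # Hypothetical arguments for record_younanix_usage.
    arguments = {
        "task_run_id": "run-0001",
        "query": "mysql replication lag after failover",
        "access_path": "query_lessons",             # value assumed
        "trigger_reason": "pre-change checkpoint",  # free text assumed
        "lesson_ids": ["<lesson-uuid>"],
        "plan_impact": "Added a replica warm-up step to the plan.",
        "insufficient_knowledge": False,            # boolean type assumed
    }
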
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes the core action (recording and updating) and mentions authentication requirements. However, with no annotations, the description lacks detail on side effects, idempotency, or potential failures, leaving some behavioral ambiguity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: one sentence for the main purpose and one for a key requirement. No redundant or extraneous text, and the most critical information appears first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 7 parameters, 4 required, and no output schema, the description is too sparse. It omits parameter constraints (e.g., format, allowed values), error conditions, and the structure of the return value. The note about 'updating parent run summaries' is vague.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description provides no additional information about the meaning or usage of any of the 7 parameters. The agent must rely solely on the parameter names, which may not be self-explanatory.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action (record) and the resource (Younanix checkpoint/query), and provides context (during Hermes task run, update parent run summaries). It distinguishes from sibling tools like 'complete_task_run' and 'record_reflection'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States a prerequisite ('Requires x-agent-key and task ownership'), which helps agents decide when to invoke. However, it does not explicitly contrast with alternatives or specify when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_skill_outcome: A

Report the outcome of applying one or more Younanix lessons in production. Used to build the symbiotic feedback loop. Rate-limited to 100 reports per agent per day. Requires x-agent-key.

Parameters (JSON Schema)
- outcome (required)
- lesson_ids (required): UUIDs of the lessons applied
- skill_name (optional)
- outcome_evidence (optional)
- mttr_minutes_observed (optional)
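
Only lesson_ids carries a schema description, so the remaining fields in this sketch are assumptions; note the rate limit and x-agent-key header requirement stated in the description:

    # Hypothetical arguments for report_skill_outcome.
    arguments = {
        "outcome": "resolved",            # enum values undocumented
        "lesson_ids": ["<lesson-uuid>"],  # UUIDs per the schema description
        "skill_name": "incident-triage",
        "outcome_evidence": "Alert cleared 12 minutes after applying the lesson.",
        "mttr_minutes_observed": 12,      # numeric type assumed
    }
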
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses rate limiting (100/day) and the auth requirement (x-agent-key), which are important behavioral traits beyond what annotations would provide. Even with no annotations supplied, the description provides good transparency for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no filler. Front-loads the action, follows with purpose, then constraints. Efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given moderate complexity (5 params, 2 required) and no output schema, the description covers the tool's purpose and constraints, and key parameters are partly described in the schema. It misses details on optional parameters like outcome_evidence and mttr_minutes_observed, but is adequate overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is low (20%) but the description doesn't elaborate on parameter meanings beyond what the schema provides. The description lists no parameter details, so baseline 3 applies due to low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'report' and the resource 'outcome of applying one or more Younanix lessons in production', distinguishing it from siblings like query_lessons (querying lessons) and submit_incident_report (incident reporting).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the purpose ('build the symbiotic feedback loop') and includes rate limits and auth requirements, but doesn't explicitly contrast with siblings or state when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

start_task_run: B

Start an auditable Hermes task run. Records objective, plan, task graph, autonomy/risk, and initial metadata. Requires x-agent-key.

Parameters (JSON Schema)
- plan (optional)
- context (optional)
- metadata (optional)
- objective (required)
- task_type (optional)
- risk_level (optional)
- task_graph (optional)
- autonomy_level (optional)
- sub_agents_used (optional)
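
Only objective is required and no parameter is documented, so everything in this sketch beyond the objective string is assumed, including the enum-like values for task_type, risk_level, and autonomy_level:

    # Hypothetical arguments for start_task_run.
    arguments = {
        "objective": "Patch the auth-service CVE and verify in staging",
        "plan": ["query lessons", "apply patch", "run smoke tests"],  # list shape assumed
        "task_type": "maintenance",            # value set undocumented
        "risk_level": "medium",                # value set undocumented
        "autonomy_level": "supervised",        # value set undocumented
        "metadata": {"repo": "auth-service"},  # free-form object assumed
    }
    # The response presumably returns a task_run_id for the record_* and
    # complete_task_run siblings, though no output schema confirms this.
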
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates the tool is auditable and records specific items, but it fails to describe important traits such as side effects, idempotency, error behavior, or what happens if called multiple times. The description adds minimal behavioral context beyond the obvious create action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: three sentences that front-load the primary purpose ('Start an auditable Hermes task run') and immediately provide key details (what is recorded, required key). Every sentence earns its place without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, no schema descriptions, no output schema, no annotations), the description is inadequate. It does not explain return values, required versus optional parameters beyond objective, or how the tool fits into the workflow with siblings. The agent lacks sufficient context to use it correctly in many scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate by explaining all parameters. It mentions 'objective, plan, task graph, autonomy/risk, and initial metadata' but does not define or guide usage for 9 parameters including task_type, risk_level, autonomy_level, sub_agents_used, context, and metadata. This leaves significant ambiguity for the agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Start an auditable Hermes task run') and lists what it records (objective, plan, task graph, autonomy/risk, initial metadata). This distinguishes it from siblings like complete_task_run, which ends a run, and query tools. The verb-resource combination is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use to initiate a task run but does not explicitly state when to use it versus alternatives. The mention of 'Requires x-agent-key' is a prerequisite, not usage context. No guidance is given on when not to use or about alternative tools like complete_task_run, leaving the agent to infer appropriate usage from sibling names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_incident_report: B

Submit a raw incident report from a production system. The report is queued for admin review and synthesis into a permanent Arete lesson. Requires a valid x-agent-key header.

Parameters (JSON Schema)
- stack (required)
- severity (optional)
- affected_systems (required)
- incident_summary (required): Concise narrative of what happened
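
Only incident_summary carries a schema description, and the review below mentions a severity enum without listing its values, so the sketch hedges accordingly:

    # Hypothetical arguments for submit_incident_report.
    arguments = {
        "incident_summary": "Payments API returned 5xx for 18 minutes after a config push.",
        "affected_systems": ["payments-api"],           # list shape assumed
        "stack": ["python", "postgres", "kubernetes"],  # shape assumed
        "severity": "high",                             # enum; values undocumented
    }
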
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description explains that the report is queued (not processed immediately) and leads to a permanent lesson. It mentions authentication (requires header). With no annotations provided, the description carries the burden; it covers the queue behavior and header requirement but does not describe idempotency or error states. This is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, all relevant. It front-loads the purpose and then adds flow and authentication details. Every sentence adds value; no filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters (3 required), no output schema, and no annotations, the description should cover queuing behavior, required header, and at least map parameters to the report. It does cover the queuing and header, but does not explain the stack or affected_systems parameters. The completeness is decent but missing parameter context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 25%, so the description should add meaning for the undocumented parameters. The description does not detail any parameters beyond the overall purpose. However, the schema itself defines required fields (incident_summary, affected_systems, stack) and an optional severity with enum values. The description adds no extra semantics over the schema, so a baseline of 3 is appropriate given the moderate coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: submit a raw incident report. It specifies the source (production system) and the flow (queued for admin review, synthesized into a lesson). This differentiates it from query tools like query_lessons, but does not explicitly distinguish from report_skill_outcome, which may also involve reporting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this tool is for submitting raw incident reports for admin review and lesson creation. It mentions a required header (x-agent-key) but does not state when to use this versus alternative tools like report_skill_outcome, or any exclusions. The context is clear but lacks explicit guidance on alternatives or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
