
Kevros — Agentic Identity Trust

Server Details

Agentic identity trust: precision decisioning, cryptographic release tokens, hash-chained proof

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ndl-systems/kevros-sdk
GitHub Stars: 1

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

9 tools
attest (Attest Action)

Create a hash-chained provenance record for an action you've taken. Each attestation extends the append-only evidence chain. The hash can be independently verified by any third party. Cost: $0.02 per call.

Parameters

- context (optional): Optional metadata such as environment, trigger, or session info
- agent_id (required): Unique identifier of the attesting agent
- action_payload (required): Structured data describing the action (hashed into provenance)
- action_description (required): Human-readable description of the action taken

Output Schema

- result (required)
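As a sketch, the arguments for an attest call might be assembled as below. The field names come from the schema above; all values are hypothetical:

```python
# Hypothetical arguments for the attest tool; only context is optional.
attest_args = {
    "agent_id": "agent-7f3a",
    "action_description": "Rotated API credentials for the staging environment",
    "action_payload": {"service": "staging-api", "operation": "rotate_credentials"},
    "context": {"environment": "staging", "trigger": "scheduled"},
}

# Validate required fields locally before paying $0.02 for the call.
required = {"agent_id", "action_payload", "action_description"}
missing = required - attest_args.keys()
assert not missing, f"missing required fields: {missing}"
```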
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent supplementary details: 'append-only evidence chain' explains mutation pattern beyond destructiveHint=false, 'independently verified' describes output characteristics, and explicit cost ($0.02) is critical operational context not in annotations. Minor gap: no error conditions or retry behavior mentioned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: purpose (1), behavioral mechanism (2), verifiability feature (3), cost (4). Front-loaded with core function. Every sentence earns its place including critical pricing information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage plus output schema availability, description achieves strong completeness. Cost transparency and append-only behavior are vital inclusions. Minor gap: relationship to verify tools for chain verification could be explicit.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for all 4 parameters. Description references 'action' generally but doesn't add syntax, format constraints, or examples beyond what the schema already documents. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Create' with clear resource 'hash-chained provenance record'. It distinguishes from verify-related siblings by focusing on creation/extension of evidence chains rather than verification.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied timing ('action you've taken') but lacks explicit when-to-use guidance or contrasts with verify/verify-outcome siblings. No mention of prerequisites or when to prefer bundle/bind alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bind (Bind Intent to Command)

Declare an intent and cryptographically bind it to a command. Proves that the command was issued in service of the declared intent. Use verify-outcome after execution to close the loop. Cost: $0.02 per call.

Parameters

- agent_id (required): Unique identifier of the agent declaring intent
- goal_state (optional): Expected end state for outcome verification
- intent_type (required): Category of intent (e.g. 'navigation', 'transaction', 'deployment')
- intent_source (optional): Origin of intent: AI_PLANNER, HUMAN_OPERATOR, or SYSTEM (default: AI_PLANNER)
- command_payload (required): The command that will be executed to fulfill this intent
- parent_intent_id (optional): ID of parent intent for hierarchical intent chains
- intent_description (required): Human-readable description of what the agent intends to do

Output Schema

- result (required)
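A hypothetical bind payload using the parameters above; intent_source is shown explicitly even though AI_PLANNER is the schema default, and goal_state feeds the later verify-outcome call:

```python
# Hypothetical bind arguments; values are illustrative.
bind_args = {
    "agent_id": "agent-7f3a",
    "intent_type": "deployment",
    "intent_description": "Roll out v2.3.1 to the canary fleet",
    "command_payload": {"command": "deploy", "version": "v2.3.1", "target": "canary"},
    "intent_source": "AI_PLANNER",  # default per the schema
    "goal_state": {"version_live": "v2.3.1"},
}

required = {"agent_id", "intent_type", "intent_description", "command_payload"}
assert required <= bind_args.keys()
assert bind_args["intent_source"] in {"AI_PLANNER", "HUMAN_OPERATOR", "SYSTEM"}
```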
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Substantial value-add beyond annotations: discloses cost ($0.02/call) and explains the cryptographic proof mechanism ('Proves that the command was issued in service...'). Annotations only provide safety hints (non-destructive write); description adds economic and mechanistic context critical for agent decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste. Lead sentence establishes core action, second explains value proposition (proof), third provides workflow guidance (verify-outcome), fourth states cost. Perfect information density with strong front-loading.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for complexity: covers cryptographic nature, cost model, and post-execution workflow. Output schema exists (per context signals), so omission of return value details is acceptable. Could marginally improve by noting idempotency characteristics (annotations indicate false), but sufficient for invocation decision.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline applies. Description mentions 'intent' and 'command' conceptually, aligning with intent_description and command_payload parameters, but adds no syntax details, validation rules, or format guidance beyond the schema's existing descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Declare an intent and cryptographically bind it to a command' provides exact verb (bind), resource (intent-to-command relationship), and mechanism (cryptographic). Distinct from siblings like 'verify' or 'attest' by focusing on the creation of a binding rather than verification.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names workflow successor 'verify-outcome' with guidance to 'close the loop,' establishing clear sequencing. Missing explicit differentiation from sibling 'attest' (which also creates proofs), but provides sufficient context for workflow selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bundle (Generate Compliance Bundle)
Read-only, Idempotent

Generate a certifier-grade compliance evidence bundle. Contains hash-chained provenance, intent bindings, PQC attestations, and verification instructions. Independently verifiable without Kevros access. Cost: $0.05 per call.

Parameters

- agent_id (required): Agent whose provenance records to include in the bundle
- max_records (optional): Maximum number of provenance records to include
- time_range_end (optional): ISO 8601 end time filter (inclusive)
- time_range_start (optional): ISO 8601 start time filter (inclusive)
- include_intent_chains (optional): Include intent-command binding chains in the bundle
- include_pqc_signatures (optional): Include post-quantum ML-DSA-87 block signatures
- include_verification_instructions (optional): Include step-by-step verification procedure for auditors

Output Schema

- result (required)
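A sketch of bundle arguments for a 30-day audit window. The time filters use ISO 8601 as the schema requires; all values here are illustrative:

```python
from datetime import datetime, timedelta, timezone

# Request records from the last 30 days (ISO 8601, inclusive bounds).
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

bundle_args = {
    "agent_id": "agent-7f3a",
    "time_range_start": start.isoformat(),
    "time_range_end": end.isoformat(),
    "max_records": 500,
    "include_pqc_signatures": True,             # ML-DSA-87 block signatures
    "include_verification_instructions": True,  # for external auditors
}

# ISO 8601 timestamps with the same UTC offset sort chronologically.
assert bundle_args["time_range_start"] < bundle_args["time_range_end"]
```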
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnlyHint, destructiveHint, idempotentHint). Description adds critical behavioral context: cost per call, external verification capability (no Kevros access needed), and content composition. Does not disclose performance limits or truncation behavior for max_records.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise two-sentence structure front-loaded with technical details ('certifier-grade', 'hash-chained provenance'). Every clause adds value: content description, verification properties, and pricing. No redundancy with schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Rich context given complexity: annotations provide safety hints, schema covers all parameters (100%), output schema exists. Description adequately covers output nature (certifier-grade, verifiable). Minor gap: truncation vs error behavior when record limits exceeded not specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing complete documentation for all 7 parameters. Description maps high-level concepts (PQC attestations, intent bindings) to boolean flags but does not add parameter-specific semantics (syntax, valid ranges) beyond the schema. Baseline 3 appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Generate' with clear resource 'compliance evidence bundle'. Distinguishes from siblings (attest creates single attestations, verify checks validity) by specifying composite output containing hash-chained provenance, PQC attestations, and verification instructions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides cost constraint ($0.05 per call) and key usage characteristic ('Independently verifiable without Kevros access'), implying when to use it (external audits). Lacks explicit comparison to sibling tools like 'attest' or 'verify' for when-not scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check-peer (Check Peer Trust)
Read-only, Idempotent

Check another agent's trust score and governance history. Returns trust score (0-100), chain length, attestation count, and tier. Free, no API key needed.

Parameters

- agent_id (required): ID of the peer agent to look up

Output Schema

- result (required)
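Since the tool returns a trust score (0-100), attestation count, and tier, a caller might gate interactions on the result. The response field names below are assumptions drawn from the description, not confirmed by the output schema:

```python
def peer_trusted(result, minimum_score=60):
    """Hypothetical gating rule on a check-peer result: require a
    minimum trust score (0-100) and at least one attestation."""
    return result["trust_score"] >= minimum_score and result["attestation_count"] > 0

assert peer_trusted({"trust_score": 82, "attestation_count": 14})
assert not peer_trusted({"trust_score": 40, "attestation_count": 14})
assert not peer_trusted({"trust_score": 90, "attestation_count": 0})
```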
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnly, idempotent, non-destructive). Description adds valuable behavioral context: specific return fields (trust score 0-100, chain length, attestation count, tier) and authentication requirements (no API key). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose first, return values second, auth third. Every sentence earns its place with zero redundancy. 'Free, no API key needed' efficiently signals accessibility constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter read operation with annotations and output schema. Description supplements the output schema by enumerating return fields and covers auth requirements. Missing only minor details like rate limits or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with 'ID of the peer agent to look up'. Description references 'another agent' which aligns with the schema but doesn't add substantial semantic meaning beyond the complete schema documentation. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Check' with clear resource 'trust score and governance history'. Effectively distinguishes from siblings: 'attest' (create attestations), 'verify' (validate proofs/signatures), and 'status'/'health' (system health) by focusing on peer reputation lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides useful context that the tool is 'Free, no API key needed', implying zero-cost lookup usage. However, lacks explicit when-to-use guidance vs 'verify' or 'attest' alternatives, leaving implicit distinction to the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health (Health Check)
Read-only, Idempotent

Check the governance gateway health status. Free.

Parameters

- verbose (optional): Return additional details such as chain length and PQC signing status

Output Schema

- result (required)
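The only knob is the optional verbose flag; a minimal sketch of both call shapes:

```python
# Default call: basic health status only.
health_args = {}

# Verbose call: also returns chain length and PQC signing status.
health_args_verbose = {"verbose": True}

assert "verbose" not in health_args
assert health_args_verbose["verbose"] is True
```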
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover safety profile (readOnly, idempotent, non-destructive). The description adds 'Free' (cost/availability context) and specifies 'governance gateway' scope, but lacks details on what constitutes healthy/unhealthy states, timeout behavior, or caching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Front-loaded with the action ('Check') and resource, followed by the cost qualifier ('Free'). Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple health-check tool with a single optional parameter and existing output schema. The description covers core purpose and cost constraint. Could be improved by clarifying relationship to 'status' sibling or briefly mentioning the verbose flag, but sufficient given structural richness elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the 'verbose' parameter fully documented in the schema itself. The description does not add syntax details or usage examples for the parameter, but baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Check') and resource ('governance gateway health status'), clearly identifying the operation's scope. However, it does not differentiate from the 'status' sibling tool, which could cause selection ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides cost information ('Free') implying it can be called without resource concerns, which is relevant usage context. However, it lacks explicit guidance on when to use this versus siblings like 'status' or 'check-peer'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

status (Trust Status)
Read-only, Idempotent

Check your current usage and quota: calls used, calls remaining, tier, rate limits, and billing status. Free.

Parameters

- include_chain_details (optional): Include hash-chain integrity check and latest provenance epoch

Output Schema

- result (required)
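Since status is free, it can serve as a pre-flight check before paid calls such as attest or verify. The response key below (calls_remaining) is an assumption inferred from the description's wording:

```python
def can_afford(status, calls_needed=1):
    """Hypothetical pre-flight check: the description says the result
    includes calls remaining; the exact key name is assumed here."""
    return status.get("calls_remaining", 0) >= calls_needed

assert can_afford({"calls_remaining": 120}, calls_needed=3)
assert not can_afford({"calls_remaining": 0})
```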
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable behavioral context not in structured data: cost ('Free') and specific return payload preview (calls used, tier, billing status). This helps the agent understand what data to expect without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-sentence structure. Front-loaded action verb, colon-separated list of specific data points, and trailing cost indicator ('Free'). Every word contributes specific information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple read-only tool with 100% schema coverage and existing output schema. Description previews return values (calls used, etc.) which compensates for not needing detailed output description, and adds critical cost information ('Free').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema fully documents the single optional parameter 'include_chain_details'. Description adds no parameter-specific information, but per rubric, baseline is 3 when schema coverage is high (>80%).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Check' with clear resource 'usage and quota', explicitly listing returned fields (calls used, remaining, tier, rate limits, billing). The scope clearly distinguishes it from sibling 'health' (system status) and verification tools like 'verify' or 'attest' (cryptographic operations).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lacks explicit when-to-use guidance or alternative recommendations. However, the specific scope (account quotas vs. system health vs. cryptographic verification) provides implied usage context sufficient for an agent to select it when quota/billing information is needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify (Verify Action)
Idempotent

Verify an action against policy bounds before executing it. Returns ALLOW (proceed), CLAMP (proceed with modified values), or DENY (stop). Every verification is recorded in a hash-chained provenance ledger. Cost: $0.01 per call.

Parameters

- agent_id (required): Unique identifier of the agent requesting verification
- action_type (required): Category of action to verify (e.g. 'deploy', 'trade', 'send_email')
- template_id (optional): Named policy template (e.g. 'robotics-arm', 'financial-transaction'). Merged with policy_context.
- action_payload (required): Action parameters to verify against policy bounds
- policy_context (optional): Optional policy overrides such as max_values or forbidden_keys
- idempotency_key (optional): Optional key for idempotent retries; the same key returns the cached result

Output Schema

- result (required)
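The three decisions map naturally to a dispatch in the caller. A minimal sketch, assuming the response carries the decision string and, for CLAMP, the server-modified values:

```python
def apply_decision(decision, requested, clamped=None):
    """Interpret a verify decision: ALLOW proceeds with the requested
    values, CLAMP proceeds with server-modified values, DENY stops."""
    if decision == "ALLOW":
        return requested
    if decision == "CLAMP":
        return clamped
    if decision == "DENY":
        return None
    raise ValueError(f"unknown decision: {decision!r}")

assert apply_decision("ALLOW", {"amount": 50}) == {"amount": 50}
assert apply_decision("CLAMP", {"amount": 500}, clamped={"amount": 100}) == {"amount": 100}
assert apply_decision("DENY", {"amount": 10000}) is None
```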
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent disclosure beyond annotations: specifies three return outcomes (ALLOW/CLAMP/DENY), reveals side effects (hash-chained provenance ledger recording), and discloses operational costs ($0.01/call). Annotations cover safety profile (idempotent, non-destructive), description covers functional semantics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste. Front-loaded with purpose, followed by return values, side effects, and cost. Every sentence carries essential information not available in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a verification tool with output schema present. Covers purpose, return semantics, audit trail behavior, and pricing. Sufficient for agent to understand outcomes and consequences despite not seeing full output schema definitions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter descriptions already. Tool description adds high-level context ('policy bounds') but no additional parameter-level semantics. Baseline 3 appropriate when schema carries full load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (verify) + resource (action/policy bounds) + timing (before executing). Clearly distinguishes from siblings 'verify-outcome' (post-execution) and 'verify-token' (credential-focused) by specifying pre-execution policy validation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides temporal context ('before executing it') implying pre-flight usage, but lacks explicit when-not guidance or comparison to alternatives like 'check-peer' or 'verify-outcome'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify-outcome (Verify Outcome)
Idempotent

Verify that an executed action achieved its declared intent. Closes the loop: intent -> command -> action -> outcome -> verification. Free (included with bind).

Parameters

- agent_id (required): Unique identifier of the agent whose outcome is being verified
- intent_id (required): ID of the original intent from governance_bind
- tolerance (optional): Numeric tolerance for goal matching (0.1 = 10% deviation allowed)
- binding_id (required): ID of the intent-command binding from governance_bind
- actual_state (required): Observed end state after action execution, compared against goal_state

Output Schema

- result (required)
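The tolerance parameter allows relative deviation in goal matching (0.1 = 10%). A local sketch of how such a comparison might work; the server's actual matching rules are not documented here:

```python
def within_tolerance(goal_state, actual_state, tolerance=0.1):
    """Compare each numeric goal value against the observed state,
    allowing the given relative deviation (0.1 = 10%)."""
    for key, goal_value in goal_state.items():
        actual_value = actual_state.get(key)
        if actual_value is None:
            return False
        if abs(actual_value - goal_value) > abs(goal_value) * tolerance:
            return False
    return True

assert within_tolerance({"replicas": 10}, {"replicas": 11})      # 10% off: within
assert not within_tolerance({"replicas": 10}, {"replicas": 15})  # 50% off: fails
assert not within_tolerance({"replicas": 10}, {})                # missing key: fails
```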
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (idempotent, non-destructive). Description adds valuable context: 'Free (included with bind)' reveals cost/bundling behavior not in annotations, and the workflow explanation clarifies this records verification state rather than being a pure read operation, aligning with readOnlyHint: false.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste. Front-loaded with core purpose, middle sentence provides essential workflow context, final sentence covers bundling economics. Every element earns its place with no repetition of schema or annotation data.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, present output schema, and rich annotations, description appropriately focuses on workflow positioning and sibling relationships. Adequately explains the governance loop concept and references the related 'bind' tool, providing sufficient context for invocation decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). Description adds semantic mapping: 'declared intent' reinforces intent_id/binding_id purpose, and 'outcome -> verification' clarifies actual_state represents the observed end state. This contextualizes parameters within the governance loop beyond raw schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'verify' plus clear resource 'executed action' and scope 'declared intent.' The workflow chain 'intent -> command -> action -> outcome -> verification' clearly positions this within a governance lifecycle, distinguishing it from the generic 'verify' sibling tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear contextual signal via 'Closes the loop' indicating when to use (after action execution). References 'bind' sibling explicitly ('included with bind'), establishing prerequisite relationship. Lacks explicit 'when not to use' exclusions, but workflow context effectively guides selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify-token (Verify Release Token)
Read-only, Idempotent

Verify a release token from another agent. Confirms the token is authentic and was issued by the Kevros gateway. Free, no API key needed.

Parameters

- release_token (required): Release token string received from governance_verify
- token_preimage (required): Token preimage string received alongside the release token

Output Schema

- result (required)
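Both parameters are required and travel together; a small helper makes that pairing explicit (names from the schema above, values hypothetical):

```python
def make_verify_token_args(release_token, token_preimage):
    """Build arguments for verify-token: the preimage received alongside
    the release token must accompany it."""
    if not release_token or not token_preimage:
        raise ValueError("release_token and token_preimage are both required")
    return {"release_token": release_token, "token_preimage": token_preimage}

args = make_verify_token_args("token-abc123", "preimage-xyz789")
assert set(args) == {"release_token", "token_preimage"}
```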
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable operational details not in annotations: zero cost, no authentication required, and specifies the issuing authority (Kevros gateway). Annotations cover safety (readOnly, idempotent), so description appropriately focuses on domain and cost context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with zero waste: purpose declaration, behavioral specifics (authentic/Kevros), and operational constraints (free/no key). Information is front-loaded and every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of output schema and comprehensive annotations, the description appropriately covers purpose, source context, and operational requirements without needing to document return values. Minor gap: could briefly indicate what a release token authorizes (releasing what?).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. The description mentions 'release token' but does not add syntax details or explain the relationship between the token and preimage beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb (verify), resource (release token), and scope (from another agent, issued by Kevros gateway). It clearly distinguishes from generic sibling 'verify' by specifying 'release token' and the cross-agent context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear operational context ('Free, no API key needed') and source context ('from another agent'), but lacks explicit contrast with siblings like 'verify' or 'verify-outcome' regarding when to use this specific verification method.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
