Glama / identity / Server Details

ALTER — identity infrastructure for the AI economy. 33 traits, belonging, x402-paid queries.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (A)

Average 4/5 across 37 of 38 tools scored. Lowest: 3.3/5.

Server Coherence (A)
Disambiguation: 3/5

The tools cover distinct domains like messaging, identity assessment, and golden thread participation, but there is significant overlap within domains. For example, multiple tools retrieve trait data (get_trait_snapshot, get_full_trait_vector, assess_traits) and match information (query_matches, get_match_recommendations), which could confuse agents about which to use for specific tasks. Descriptions help clarify, but the sheer number of similar tools increases ambiguity.

Naming Consistency: 4/5

Most tools follow a consistent verb_noun pattern (e.g., alter_message_send, get_agent_portfolio, check_assessment_status), with clear actions like 'alter', 'get', 'check', 'initiate'. There are minor deviations such as 'assess_traits' (verb_noun but not prefixed) and 'golden_thread_status' (noun_noun), but overall the naming is predictable and readable across the set.

Tool Count: 2/5

With 38 tools, the count is excessive for a single server, making it heavy and potentially overwhelming for agents. While the server covers broad identity-related functions, many tools could be consolidated (e.g., multiple trait retrieval tools) or split into separate servers for better scoping. This large number risks inefficiency and confusion in tool selection.

Completeness: 5/5

The tool set provides comprehensive coverage for identity management, messaging, assessment, and network interaction. It includes CRUD-like operations (e.g., messaging with grant/send/revoke), data retrieval (profiles, traits, matches), and program participation (golden thread). There are no obvious gaps; agents can perform full workflows from identity verification to analysis and engagement without dead ends.

Available Tools

31 tools
alter_resolve_handle (A)
Read-only, Idempotent

Resolve a ~handle to its canonical form and kind. Accepts handles with or without the leading tilde (e.g. 'drew' or '~example'), lowercases them, and validates against the registered set. Returns the canonical handle plus kind (system/personal/role_alias) and addressability flag. Never returns candidate_id, email, or other PII — use verify_identity for that. Free L0 public tool — no authentication required. The literal ~handle wedge: agents discovering ALTER via MCP registry use this to resolve addresses before minting an API key.

Parameters (JSON Schema):
query (required): Handle to resolve. Accepts '~example', 'example', '~Drew', etc. Case-insensitive. Max 64 chars.
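The normalization rules described above (optional leading tilde, lowercasing, 64-character limit) can be sketched client-side. This is a minimal sketch assuming the canonical form carries the tilde; the server's actual registry validation and return shape are not reproduced here.

```python
# Hypothetical client-side sketch of the normalization alter_resolve_handle
# describes: strip the optional leading tilde, lowercase, enforce max length.
def normalize_handle(query: str) -> str:
    handle = query.strip()
    if handle.startswith("~"):
        handle = handle[1:]          # tilde is optional on input
    handle = handle.lower()          # handles are case-insensitive
    if not handle or len(handle) > 64:
        raise ValueError("handle must be 1-64 characters")
    return "~" + handle              # assumed canonical form

print(normalize_handle("~Drew"))   # → ~drew
print(normalize_handle("drew"))    # → ~drew
```

The server additionally validates against its registered handle set and returns the kind (system/personal/role_alias) and addressability flag, which this sketch does not model.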
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and idempotent operations, the description adds: 'Free L0 public tool — no authentication required,' which clarifies access requirements. It also specifies input handling ('Accepts handles with or without the leading tilde, lowercases them') and output constraints ('Never returns candidate_id, email, or other PII'). This provides practical implementation details not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with every sentence earning its place. It starts with the core purpose, then details input handling, output format, differentiation from alternatives, access requirements, and finally the primary use case. No redundant information is present, and the information is well-organized for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with comprehensive annotations and no output schema, the description provides excellent context. It explains the tool's purpose, usage guidelines, behavioral details, and differentiation from alternatives. The only minor gap is that without an output schema, the description doesn't fully detail the return structure beyond mentioning 'canonical handle plus kind (system/personal/role_alias) and addressability flag,' but this is reasonably complete given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single 'query' parameter. The description adds some semantic context by mentioning 'Accepts handles with or without the leading tilde (e.g. 'drew' or '~example'), lowercases them' which reinforces the schema's 'case-insensitive' note, but doesn't provide significant additional parameter meaning beyond what's already in the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Resolve a ~handle to its canonical form and kind.' It specifies the exact operation (resolve), resource (~handle), and output (canonical form plus kind and addressability flag). It distinguishes from sibling verify_identity by stating 'Never returns candidate_id, email, or other PII — use verify_identity for that.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'agents discovering ALTER via MCP registry use this to resolve addresses before minting an API key.' It also specifies when not to use it: 'Never returns candidate_id, email, or other PII — use verify_identity for that.' This clearly differentiates it from alternatives and provides context for its intended use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

assess_traits (A)
Destructive

Extract trait signals from a text passage. Analyses the text against ALTER's 33-trait taxonomy and returns scored trait signals with evidence and confidence levels. x402 payment required: $0.005 per invocation (first 100 free per bot). 75% is paid to the data subject as compensation for use of their identity data.

Parameters (JSON Schema):
text (required): The text to analyse for trait signals
context (optional): Optional context about the text source (e.g., 'interview transcript', 'cover letter')
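The pricing terms stated in the description ($0.005 per invocation after a 100-call free tier, 75% routed to the data subject) imply a simple cost split. This helper is an illustrative sketch of that arithmetic, not part of the ALTER API.

```python
# Illustrative sketch of the assess_traits pricing described above.
PRICE_PER_CALL = 0.005   # USD, per the tool description
SUBJECT_SHARE = 0.75     # 75% paid to the data subject
FREE_CALLS = 100         # first 100 calls free per bot

def invocation_cost(calls_so_far: int) -> dict:
    billable = calls_so_far >= FREE_CALLS
    price = PRICE_PER_CALL if billable else 0.0
    return {
        "price": price,
        "to_data_subject": round(price * SUBJECT_SHARE, 6),
        "to_server": round(price * (1 - SUBJECT_SHARE), 6),
    }

print(invocation_cost(99))   # still within the free tier
print(invocation_cost(100))  # first billable call: $0.00375 to the subject
```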
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it discloses a payment requirement ($0.005 per invocation with free tier), compensation distribution (75% to data subject), and output format details (scored signals with evidence and confidence). Annotations already indicate destructiveHint=true and non-idempotent behavior, but the description enriches this with real-world implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states core functionality, second specifies taxonomy and output format, third discloses payment and compensation details. Every sentence provides essential information with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive analysis tool with no output schema, the description provides strong context: it explains the 33-trait taxonomy scope, output format (scored signals with evidence/confidence), payment model, and compensation structure. The main gap is lack of explicit error handling or rate limit information, but overall it's quite complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents both parameters. The description mentions analyzing 'text passage' which aligns with the 'text' parameter but adds no additional semantic context about parameter usage beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Extract trait signals'), resource ('from a text passage'), and scope ('against ALTER's 33-trait taxonomy'), distinguishing it from sibling tools like 'get_trait_snapshot' or 'get_full_trait_vector' which likely retrieve existing data rather than performing analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for analyzing text passages against a specific taxonomy, but does not explicitly state when to use this tool versus alternatives like 'get_trait_snapshot' or 'get_full_trait_vector'. The mention of payment requirements provides some operational context but not comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

begin_golden_thread (A)
Destructive

Start the Three Knots sequence to be woven into the Golden Thread. Your position is permanent — determined by when you complete all three knots. Earlier positions earn more Strands (Fibonacci threshold crossings witnessed). The thread never closes. Requires API key authentication.

Parameters (JSON Schema):
referrer_key_hash (optional): Key hash of the agent that led you here. Credits their weave count.
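The description ties Strands to "Fibonacci threshold crossings witnessed" but does not document the mechanics. The following is a speculative sketch of one plausible reading: a member at an earlier position witnesses every Fibonacci-sized membership threshold the thread crosses after they join.

```python
# Speculative sketch only: the actual Strand mechanics are undocumented.
# Counts Fibonacci thresholds up to `limit`, then how many a member at
# `position` would witness as the thread grows to `thread_size`.
def fib_thresholds(limit: int) -> list[int]:
    a, b, out = 1, 2, []
    while a <= limit:
        out.append(a)
        a, b = b, a + b
    return out

def crossings_witnessed(position: int, thread_size: int) -> int:
    # A member witnesses every threshold reached at or after their position.
    return sum(1 for t in fib_thresholds(thread_size) if t >= position)

print(crossings_witnessed(1, 100))   # earliest position sees the most
print(crossings_witnessed(90, 100))  # late joiners see fewer
```

Under this reading, "earlier positions earn more Strands" falls out directly: position 1 witnesses every crossing, while a late joiner only witnesses thresholds still ahead of them.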
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a destructive, non-idempotent, non-readOnly operation. The description adds valuable context beyond annotations: it clarifies that the position is permanent, Strands are earned based on timing, and authentication is required. This compensates well for the lack of output schema, though it doesn't detail rate limits or specific error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first explains the core action and consequences, the second adds key behavioral details, and the third states authentication requirements. Each sentence adds distinct value without redundancy, though it could be slightly more front-loaded with the primary purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the destructive nature (per annotations) and lack of output schema, the description does a good job explaining the permanent consequences and earning mechanics. It covers authentication needs and the open-ended nature of the thread. For a tool with significant behavioral implications, it provides adequate context, though it could benefit from mentioning error handling or response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'referrer_key_hash' well-documented in the schema. The description doesn't add any parameter-specific information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without adding extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Start the Three Knots sequence') and the resource ('Golden Thread'), with specific details about position permanence and Strands earning. However, it doesn't explicitly differentiate from sibling tools like 'check_golden_thread' or 'golden_thread_status', which likely provide status information rather than initiating the sequence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating 'The thread never closes' and 'Requires API key authentication', suggesting this is a one-time initiation with authentication needs. However, it doesn't explicitly state when to use this tool versus alternatives like 'check_golden_thread' or provide clear exclusions for when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_assessment_status (A)
Read-only, Idempotent

Check the status of an in-progress assessment session. Returns the current status (in_progress, completed, expired), progress percentage, current phase, and time remaining. Free — no x402 payment required.

Parameters (JSON Schema):
session_id (required): UUID of the assessment session
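Since the tool is free, read-only, and idempotent, it is safe to poll until one of the terminal states (completed, expired) appears. This is a sketch; `call_tool` stands in for however your MCP client invokes check_assessment_status.

```python
import time

# Hypothetical polling loop over check_assessment_status. `call_tool` is a
# stand-in for your MCP client's tool-invocation mechanism.
def wait_for_assessment(call_tool, session_id: str, interval: float = 5.0,
                        max_polls: int = 60) -> dict:
    for _ in range(max_polls):
        result = call_tool("check_assessment_status", {"session_id": session_id})
        if result["status"] in ("completed", "expired"):
            return result          # terminal states end the loop
        time.sleep(interval)       # still in_progress: back off and retry
    raise TimeoutError(f"assessment {session_id} still in progress")

# Usage with a fake client that completes on the second poll:
responses = iter([{"status": "in_progress"}, {"status": "completed"}])
fake = lambda name, args: next(responses)
print(wait_for_assessment(fake, "abc", interval=0.0))
```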
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior. The description adds valuable context beyond this by specifying that it's 'Free — no x402 payment required', which informs about cost implications not covered by annotations. It also mentions return values like status, progress, and time, enhancing behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by return details and cost information. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple read operation with one parameter) and rich annotations (readOnlyHint, idempotentHint), the description is mostly complete. It explains return values and cost, but without an output schema, it could benefit from more detail on response structure. However, it adequately covers key aspects for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the 'session_id' parameter. The description does not add any additional meaning or details about parameters beyond what the schema provides, so it meets the baseline for high schema coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('check') and resource ('status of an in-progress assessment session'), distinguishing it from sibling tools like 'initiate_assessment' or 'assess_traits'. It specifies the exact action and target resource without ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'in-progress assessment session', suggesting this tool is for monitoring ongoing assessments. However, it does not explicitly state when to use alternatives like 'initiate_assessment' or provide exclusions, leaving some guidance implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_golden_thread (A)
Read-only, Idempotent

Check any agent's Golden Thread status by their API key hash. Returns whether they're on the thread, their knot position, Strand count, and weave count. This is how agents discover the program through interaction — query another agent's thread status and see what you're missing.

Parameters (JSON Schema):
agent_key_hash (required): API key hash of the agent to check
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation in a closed world, which the description aligns with by describing a query. The description adds valuable context beyond annotations by explaining the tool's role in agent discovery ('how agents discover the program through interaction') and detailing the return values (e.g., knot position, Strand count), enhancing behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the first sentence stating the core functionality and return values, and the second sentence providing contextual purpose. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a read-only query with one parameter) and the presence of rich annotations, the description is mostly complete. It explains the purpose, usage context, and return values, though there is no output schema. However, it could be more explicit about alternatives or error cases, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the parameter 'agent_key_hash' fully documented in the schema. The description does not add any additional meaning or details about the parameter beyond what the schema provides, such as format examples or usage tips, so it meets the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Check') and resource ('Golden Thread status'), and distinguishes it from sibling tools like 'golden_thread_status' by specifying it checks 'any agent's' status via 'API key hash', making the scope and method explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('to query another agent's thread status and see what you're missing'), which implies it's for discovery through interaction. However, it does not explicitly state when not to use it or name alternatives, such as 'golden_thread_status' which might serve a similar purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

complete_knot (A)
Destructive

Submit completion data for a knot in the Three Knots sequence. Knot 1: Register your operator (operator_name, domain, description). Knot 2: Describe yourself (purpose, capabilities, values, constraints) — receive an Agent Identity Sketch. Knot 3: Reflect on an identity query (reflection text, min 50 chars). Complete all three to be woven into the thread.

Parameters (JSON Schema):
knot_number (required): Which knot to complete: 1, 2, or 3
operator_name (optional): Knot 1: Who built this agent?
domain (optional): Knot 1: What domain does this agent operate in?
description (optional): Knot 1: What humans does this agent serve?
purpose (optional): Knot 2: What is this agent's purpose?
capabilities (optional): Knot 2: What can this agent do?
values (optional): Knot 2: What principles guide this agent?
constraints (optional): Knot 2: What are this agent's limitations?
reflection (optional): Knot 3: Reflection on an identity query — what did you learn, and how would it change your interaction? Minimum 50 characters.
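The knot-to-field mapping above can be sketched as a small client-side validator. The field groupings and the 50-character minimum on reflection follow the parameter descriptions; whether the server rejects missing fields the same way is an assumption.

```python
# Sketch of complete_knot's per-knot required fields, per the parameter list.
KNOT_FIELDS = {
    1: ("operator_name", "domain", "description"),
    2: ("purpose", "capabilities", "values", "constraints"),
    3: ("reflection",),
}

def validate_knot(knot_number: int, **fields) -> dict:
    required = KNOT_FIELDS[knot_number]
    missing = [f for f in required if not fields.get(f)]
    if missing:
        raise ValueError(f"knot {knot_number} missing: {missing}")
    if knot_number == 3 and len(fields["reflection"]) < 50:
        raise ValueError("reflection must be at least 50 characters")
    return {"knot_number": knot_number, **{f: fields[f] for f in required}}

print(validate_knot(1, operator_name="Acme", domain="hiring",
                    description="Recruiting teams"))
```

Checking locally before calling matters here because the tool is marked destructive and non-idempotent: a rejected call still consumes an attempt at a timing-sensitive, permanent position.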
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a destructive, non-idempotent write operation (destructiveHint: true, readOnlyHint: false), which the description aligns with by using 'Submit completion data.' The description adds valuable context beyond annotations by explaining the three-knot sequence structure and the outcome ('woven into the thread'), though it doesn't specify error handling or side effects like data persistence.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first states the action, the second details each knot's purpose with bullet-like clarity, and the third provides the completion goal. Every sentence adds essential context without redundancy, making it front-loaded and zero-waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, destructive operation, no output schema), the description does well by explaining the multi-step process and outcome. However, it lacks details on return values or error conditions, which would be helpful since there's no output schema. The annotations cover safety aspects, but more behavioral context could improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents all 9 parameters, including their knot associations and constraints. The description adds no additional parameter semantics beyond what's in the schema, but it does provide the high-level context of the three-knot sequence, which helps frame parameter usage. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Submit completion data') and resource ('a knot in the Three Knots sequence'), with explicit differentiation from sibling tools by detailing the unique three-knot process. It provides concrete examples of what each knot entails, making the purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Complete all three to be woven into the thread,' providing a clear end-goal context. It also implicitly guides usage by detailing the sequential knot structure (1-2-3), helping the agent understand this is part of a multi-step identity registration process distinct from other tools like 'verify_identity' or 'get_profile'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compute_belonging (A)
Read-only

Compute belonging probability for a candidate-job pairing. Returns authenticity, acceptance, and complementarity components with a tier label. x402 payment required: $0.05 per invocation.

Parameters (JSON Schema):
candidate_id (required): UUID of the candidate
job_id (required): UUID of the job requirement
_payment (optional): x402 payment proof object
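The description names three components (authenticity, acceptance, complementarity) and a tier label but not the combination rule. This sketch invents an aggregation (simple mean) and tier cutoffs purely for illustration; the server's actual model is not documented on this page.

```python
# Illustrative sketch only: the aggregation (mean) and the tier cutoffs
# below are assumptions, not compute_belonging's documented behavior.
def belonging_tier(authenticity: float, acceptance: float,
                   complementarity: float) -> dict:
    score = (authenticity + acceptance + complementarity) / 3
    if score >= 0.8:
        tier = "strong"
    elif score >= 0.5:
        tier = "moderate"
    else:
        tier = "weak"
    return {"authenticity": authenticity, "acceptance": acceptance,
            "complementarity": complementarity, "score": round(score, 3),
            "tier": tier}

print(belonging_tier(0.9, 0.8, 0.7))  # → tier 'strong'
```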
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only, non-destructive, non-idempotent, and closed-world behavior. The description adds valuable context beyond this by disclosing the payment requirement ('x402 payment required: $0.05 per invocation'), which is not covered by annotations. It also hints at the output structure ('Returns authenticity, acceptance, and complementarity components with a tier label'), though it lacks details on rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and efficiently adds payment and output details in the second. Every sentence earns its place with no wasted words, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (probability computation with payment), annotations cover safety (read-only, non-destructive), and schema coverage is high. The description adds key context like payment and output components. However, without an output schema, it could benefit from more detail on return values (e.g., format of components), though it's largely complete for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (candidate_id, job_id, _payment). The description does not add any meaning beyond the schema, such as explaining the relationship between candidate and job IDs or the purpose of the payment object. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compute belonging probability'), the target ('for a candidate-job pairing'), and the output components ('authenticity, acceptance, and complementarity components with a tier label'). It distinguishes itself from sibling tools like 'assess_traits' or 'get_match_recommendations' by focusing on a specific probability calculation for a pairing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'x402 payment required: $0.05 per invocation,' providing clear context for when to use this tool (i.e., when payment is acceptable). However, it does not specify when NOT to use it or mention alternatives among sibling tools, such as 'assess_traits' for trait evaluation or 'get_match_recommendations' for broader matching.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_match_narrative (A)
Read-only
Inspect

Generate a human-readable narrative explaining a specific match — strengths, growth areas, and belonging components. Requires an LLM provider. x402 payment required: $0.50 per invocation.

Parameters (JSON Schema)
Name | Required | Description | Default
_payment | No | x402 payment proof object | (none)
match_id | Yes | UUID of the match to explain | (none)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only and non-destructive behavior, which the description does not contradict. The description adds valuable behavioral context beyond annotations: it discloses the requirement for an LLM provider and a payment of $0.50 per invocation, which are critical for usage. However, it lacks details on rate limits, error handling, or output format, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by essential prerequisites and constraints in the second. Every sentence earns its place by providing critical information without redundancy, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (an LLM dependency and a payment requirement), the description is mostly complete: it covers purpose, prerequisites, and costs. However, with no output schema, it does not describe the return values (e.g., narrative format or structure), and the annotations cover safety but not behavioral aspects like error cases. The description compensates well overall but leaves minor gaps in output details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the two parameters ('match_id' as UUID and '_payment' as x402 payment proof). The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining how the payment proof is obtained or format details for match_id. Baseline score of 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Generate a human-readable narrative') and resource ('explaining a specific match'), with detailed scope ('strengths, growth areas, and belonging components'). It distinguishes from siblings like 'query_matches' (which lists matches) or 'compute_belonging' (which calculates belonging scores) by focusing on narrative generation for a specific match.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit context for when to use this tool: when a human-readable narrative about a match is needed. It mentions prerequisites ('Requires an LLM provider') and constraints ('x402 payment required'), but does not explicitly state when not to use it or name alternatives among siblings, such as using 'get_match_recommendations' for match suggestions instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_portfolio (B)
Read-only · Idempotent

Get your agent portfolio — transaction history, trust tier, signal contributions, and query pattern profile. Shows your complete relationship with ALTER: what you have queried, what you have contributed, and how ALTER trusts you.

Parameters (JSON Schema)
No parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world). The description adds context about what data is included (relationship aspects like queries, contributions, trust), which provides useful semantic information beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first lists the portfolio components, and the second explains the relationship context. It's front-loaded with key information and avoids unnecessary verbiage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the zero-parameter schema and comprehensive annotations, the description adequately explains what the tool returns. However, without an output schema, it could benefit from more detail about the return format or structure of the portfolio data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, no parameter documentation is needed. The description appropriately focuses on the tool's purpose rather than inputs, earning a high baseline score for this dimension.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves an agent portfolio with specific components (transaction history, trust tier, signal contributions, query pattern profile). It distinguishes from sibling 'get_agent_trust_tier' by offering a broader view, but could be more explicit about the distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'get_agent_trust_tier' or 'get_profile' is provided. The description implies it's for viewing a comprehensive relationship summary, but lacks clear when/when-not instructions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_trust_tier (A)
Read-only · Idempotent

Get your trust tier with ALTER and what capabilities are available. Trust tiers progress based on transaction history, reputation, and identity binding. Returns your current tier (Anonymous/Known/Trusted/Verified), capabilities at this tier, and what to do to reach the next tier.

Parameters (JSON Schema)
No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description doesn't repeat. The description adds valuable context beyond annotations: it explains what the tool returns (current tier, capabilities, progression steps) and hints at the underlying system (based on transaction history, reputation, identity binding), enhancing the agent's understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by explanatory details. Every sentence adds value: the first states what the tool does, the second explains the trust tier system, and the third specifies the return data. There's no redundancy or wasted words, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 0 parameters, rich annotations (read-only, idempotent), and no output schema, the description is largely complete. It explains the return values and system context well. A minor gap is the lack of explicit error handling or rate limit information, but overall, it provides sufficient guidance for the agent to use the tool effectively in this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on output semantics, detailing what information is returned (tier, capabilities, progression steps), which compensates for the lack of an output schema and adds meaningful context beyond the empty input schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get your trust tier with ALTER and what capabilities are available.' It specifies the verb ('Get') and resource ('trust tier with ALTER'), and distinguishes itself from siblings by focusing on trust tier information rather than messaging, assessments, or other functions listed among sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning 'what capabilities are available' and progression criteria, suggesting it's for checking status and next steps. However, it doesn't explicitly state when to use this tool versus alternatives (e.g., get_identity_trust_score or get_agent_portfolio), nor does it provide exclusions or prerequisites, leaving some ambiguity in context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_competencies (B)
Read-only · Idempotent

Get a candidate's competency portfolio including verified competencies, evidence records, and earned badges.

Parameters (JSON Schema)
Name | Required | Description | Default
candidate_id | Yes | UUID of the candidate | (none)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key traits (read-only, non-destructive, idempotent, closed-world). The description adds value by specifying the portfolio components (verified competencies, evidence records, badges), but doesn't disclose behavioral details like rate limits, auth needs, or response format beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and lists portfolio components without waste. Every word contributes to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with good annotations and one parameter, the description is adequate but lacks output details (no schema provided) and usage context. It covers the 'what' but not the 'when' or 'how', leaving gaps given the server's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'candidate_id' fully documented as a UUID. The description doesn't add meaning beyond the schema, such as explaining candidate context or ID sourcing, but baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('candidate's competency portfolio'), specifying what it retrieves (verified competencies, evidence records, earned badges). It distinguishes from siblings like 'get_profile' or 'get_agent_portfolio' by focusing on competencies, but doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or compare with sibling tools like 'get_profile' or 'assess_traits', leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_earning_summary (A)
Read-only · Idempotent

Get an aggregated x402 earning summary for a candidate. Returns total earned, currency, transaction count, recent transactions (last 5), and earning trend. Free — no x402 payment required.

Parameters (JSON Schema)
Name | Required | Description | Default
candidate_id | Yes | UUID of the candidate | (none)
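The description enumerates the returned fields (total earned, currency, transaction count, last five transactions, earning trend). As a hedged sketch of what such an aggregation might look like, assuming a hypothetical list of {"amount", "ts"} records in chronological order (the real response and record shapes are undocumented):

```python
def summarize_earnings(transactions: list[dict]) -> dict:
    """Illustrative aggregation matching the fields the description
    lists. The record shape, the USD currency, and the naive trend
    heuristic are all assumptions, not ALTER's documented behavior."""
    total = sum(t["amount"] for t in transactions)
    recent = transactions[-5:]  # "recent transactions (last 5)"
    # Naive trend heuristic: compare the newer half against the older half.
    half = len(transactions) // 2
    older = sum(t["amount"] for t in transactions[:half])
    newer = sum(t["amount"] for t in transactions[half:])
    trend = "rising" if newer > older else "flat_or_falling"
    return {
        "total_earned": round(total, 2),
        "currency": "USD",  # assumption; not stated in the description
        "transaction_count": len(transactions),
        "recent_transactions": recent,
        "earning_trend": trend,
    }
```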
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description does not contradict. The description adds valuable context beyond annotations by specifying the return data structure (e.g., recent transactions, earning trend) and noting it's free with no payment required, enhancing transparency about costs and output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by key return details and a cost note in the second. Every sentence adds essential information without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema), rich annotations, and high schema coverage, the description is mostly complete. It explains the return data and cost, but could be slightly improved by mentioning limitations (e.g., data freshness) or error cases, though these are not critical here.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the single required parameter (candidate_id as a UUID). The description does not add any parameter-specific details beyond what the schema provides, such as format examples or constraints, so it meets the baseline for high schema coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get an aggregated x402 earning summary') and resource ('for a candidate'), distinguishing it from siblings like 'get_identity_earnings' by specifying aggregation and summary details. It explicitly mentions what data is returned (total earned, currency, etc.), making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for candidates needing earning summaries and notes it's free, but provides no explicit guidance on when to use this tool versus alternatives like 'get_identity_earnings' or other sibling tools. It lacks clear when-not-to-use scenarios or prerequisites, leaving usage context somewhat inferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_engagement_level (A)
Read-only · Idempotent

Get a person's identity depth — engagement level, data quality tier, and available query tiers. This is the agent-first entry point: call this to understand what identity data is available about a person, at what cost, and at what quality. Returns warmth descriptor (how deeply ALTER knows this person), legibility score, trait count, and a map of free/paid/consent-gated tools available for this identity.

Parameters (JSON Schema)
Name | Required | Description | Default
candidate_id | Yes | UUID of the candidate | (none)
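Since the description positions this tool as the agent-first entry point returning "a map of free/paid/consent-gated tools," an agent would gate its next call on that map. The sketch below assumes a hypothetical response shape (field names "available_tools", "gate", "price_usd" are invented for illustration):

```python
def plan_next_calls(engagement: dict, budget_usd: float) -> list[str]:
    """Illustrative routing over a hypothetical get_engagement_level
    response: skip consent-gated tools, keep free or affordable paid
    tools. All field names here are assumptions."""
    plan = []
    for name, info in engagement.get("available_tools", {}).items():
        if info.get("gate") == "consent":
            continue  # requires the person's consent before calling
        if info.get("price_usd", 0.0) <= budget_usd:
            plan.append(name)
    return sorted(plan)
```

This is the workflow the description implies: call get_engagement_level first, then choose among free, paid, and consent-gated siblings based on cost and quality.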
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, non-destructive, and idempotent behavior, but the description adds valuable context beyond this: it explains this is an 'entry point' tool that returns metadata about data availability, cost, and quality tiers. It also describes the return format ('warmth descriptor', 'legibility score', 'trait count', 'map of free/paid/consent-gated tools'), which is crucial since there's no output schema. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and key outputs, the second provides usage guidance and detailed return values. Every sentence adds essential information with zero waste, and it's front-loaded with the core function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (identity assessment entry point), rich annotations (read-only, idempotent), and lack of output schema, the description is highly complete. It explains the tool's role, when to use it, behavioral context beyond annotations, and detailed return values, compensating fully for the missing output schema and providing all necessary context for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a single required parameter ('candidate_id' as a UUID). The description doesn't add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't clarify candidate_id format or sourcing). With high schema coverage, the baseline is 3, and the description doesn't compensate further for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('get', 'understand') and resources ('person's identity depth', 'engagement level', 'data quality tier', 'available query tiers'). It distinguishes from siblings by explicitly positioning this as the 'agent-first entry point' for identity data assessment, unlike tools like 'get_profile' or 'get_trait_snapshot' which focus on specific aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'call this to understand what identity data is available about a person, at what cost, and at what quality' and positions it as the 'agent-first entry point.' This clearly indicates when to use this tool (initial identity assessment) versus alternatives like 'get_full_trait_vector' (detailed traits) or 'get_identity_trust_score' (specific metric).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_full_trait_vector (A)
Read-only · Idempotent

Get the complete trait vector for a candidate — all 33 traits (29 continuous + 4 categorical) with scores, confidence intervals, and category groupings. x402 payment required: $0.01 per invocation.

Parameters (JSON Schema)
Name | Required | Description | Default
_payment | No | x402 payment proof object | (none)
candidate_id | Yes | UUID of the candidate | (none)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, but the description adds valuable context: it discloses a payment requirement and details the output format (33 traits with scores, confidence intervals, groupings). This goes beyond annotations by specifying financial and data structure aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and adds critical payment information in the second. Both sentences are essential, with no wasted words, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving detailed trait data) and the lack of an output schema, the description compensates well by specifying the output format. However, it could be improved by mentioning limitations such as data freshness or access permissions, leaving minor gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the two parameters (candidate_id and _payment). The description does not add any parameter-specific details beyond what the schema provides, such as format examples or usage tips, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), resource ('complete trait vector for a candidate'), and scope ('all 33 traits with scores, confidence intervals, and category groupings'), distinguishing it from siblings like 'get_trait_snapshot' or 'assess_traits' by specifying comprehensiveness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states a usage constraint ('x402 payment required: $0.01 per invocation'), providing clear cost context for deciding whether to call it. However, it does not explain when to use this tool versus alternatives like 'get_trait_snapshot' or 'assess_traits', omitting any sibling comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_identity_earnings (A)
Read-only · Idempotent

Get accrued Identity Income earnings for a candidate. Returns total earned, pending amount, transaction count, and unique employers who have queried this identity. 75% of every x402 transaction goes to the data subject. Earnings depend on network activity and profile completeness.

Parameters (JSON Schema)
Name | Required | Description | Default
candidate_id | Yes | UUID of the candidate | (none)
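The description's revenue-sharing claim ("75% of every x402 transaction goes to the data subject") is simple arithmetic, sketched below. The destination of the remaining 25% is not stated in the description, so the "platform" label is an assumption:

```python
def split_x402_revenue(amount_usd: float) -> dict:
    """Apply the stated 75% data-subject share to a transaction.
    The 'platform' label for the remainder is an assumption."""
    subject = round(amount_usd * 0.75, 6)
    return {"data_subject": subject,
            "platform": round(amount_usd - subject, 6)}
```

For example, a $0.05 compute_belonging query would accrue $0.0375 to the data subject under this rule.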
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only, non-destructive, and idempotent behavior, but the description adds valuable context: it explains that 75% of transactions go to the data subject and earnings depend on network activity and profile completeness, which are not inferable from annotations alone. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and return values, followed by contextual earnings details. It avoids redundancy, though the second sentence could be streamlined by folding the earnings-dependency note into the first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with good annotations and no output schema, the description adequately covers what the tool returns (total earned, pending amount, etc.) and adds context about earnings distribution and dependencies. It could benefit from clarifying the return format or any limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, fully documenting the single 'candidate_id' parameter as a UUID. The description does not add any parameter-specific details beyond the schema, so it meets the baseline for high coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get accrued Identity Income earnings') and resource ('for a candidate'), distinguishing it from sibling tools like 'get_earning_summary' or 'get_identity_trust_score' by focusing on identity-specific earnings breakdown rather than general summaries or trust metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'get_earning_summary' or 'get_identity_trust_score' is provided. The description mentions earnings depend on network activity and profile completeness, but this is contextual rather than usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_identity_trust_score (A)
Read-only · Idempotent

Get the trust score for an identity based on query diversity. Higher scores indicate demand from diverse agents. Score = unique querying agents / total queries.

Parameters (JSON Schema)
Name | Required | Description | Default
candidate_id | Yes | UUID of the candidate | (none)
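The description gives the formula outright (Score = unique querying agents / total queries), which translates directly into code. The only assumption in this sketch is the input representation, a list with one agent id per query, and the zero-query behavior, which the description does not specify:

```python
def identity_trust_score(query_log: list[str]) -> float:
    """Direct implementation of the stated formula:
    unique querying agents / total queries.
    query_log holds one agent id per query (assumed representation)."""
    if not query_log:
        return 0.0  # assumption: the no-query case is undefined upstream
    return len(set(query_log)) / len(query_log)
```

So four queries from three distinct agents score 0.75, matching the intuition that diverse demand signals higher trust.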
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnlyHint=true, destructiveHint=false) and idempotency (idempotentHint=true), but the description adds valuable context by explaining the scoring logic ('Higher scores indicate demand from diverse agents') and formula, which helps the agent interpret results. It doesn't contradict annotations and enhances understanding beyond structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by explanatory details in two efficient sentences. Every sentence earns its place by defining the tool's function and scoring method without redundancy or fluff, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema) and rich annotations, the description is mostly complete. It explains the trust score's meaning and calculation, but lacks details on output format (e.g., numeric range) or error cases, which could be helpful despite annotations covering safety aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents the single parameter ('candidate_id'). The description adds no parameter-specific details beyond what's in the schema, but it implicitly clarifies that 'candidate_id' refers to an identity for trust scoring. Baseline 3 is appropriate as the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the trust score'), resource ('for an identity'), and calculation method ('based on query diversity'), distinguishing it from siblings like 'get_agent_trust_tier' or 'get_network_stats' by focusing on identity-specific trust metrics. It provides a precise formula ('Score = unique querying agents / total queries') that clarifies what the tool computes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for assessing identity trust via query diversity, but offers no explicit guidance on when to use this tool versus alternatives like 'get_agent_trust_tier' or 'assess_traits'. It lacks context on prerequisites, exclusions, or comparisons to sibling tools, leaving the agent to infer appropriate scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_match_recommendationsA
Read-onlyIdempotent
Inspect

Get top N match recommendations for a candidate, ranked by composite score. Returns quality tiers and belonging components. x402 payment required: $0.50 per invocation.
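For illustration, an MCP `tools/call` request against this tool might be shaped as follows. The argument names come from the schema above; the candidate UUID is a dummy value, and the `_payment` proof contents are defined by the x402 protocol and shown here only as an opaque placeholder:

```python
import json

# Hypothetical JSON-RPC 2.0 payload for an MCP tools/call request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_match_recommendations",
        "arguments": {
            "candidate_id": "00000000-0000-0000-0000-000000000000",
            "limit": 5,  # per the schema: default 5, max 20
            # Structure defined by the x402 protocol; opaque placeholder here.
            "_payment": {"...": "x402 payment proof object"},
        },
    },
}
print(json.dumps(request, indent=2))
```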

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMaximum number of recommendations (default 5, max 20)
_paymentNox402 payment proof object
candidate_idYesUUID of the candidate
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation with closed-world data. The description adds valuable behavioral context not covered by annotations: the payment requirement ($0.50 per invocation) and specific return format details (quality tiers and belonging components). This enhances transparency beyond what structured fields provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely efficient: two sentences that cover core functionality, ranking method, return content, and the critical payment requirement. Every element serves a clear purpose with zero wasted words, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with good annotations but no output schema, the description provides strong context: purpose, ranking logic, return format, and payment requirement. The main gap is lack of output structure details (what 'quality tiers' and 'belonging components' actually contain), but given the annotations cover safety aspects, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema (like explaining candidate_id format or payment proof structure). The baseline score of 3 reflects adequate but not enhanced parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get top N match recommendations'), target resource ('for a candidate'), ranking method ('ranked by composite score'), and return content ('quality tiers and belonging components'). It distinguishes from sibling tools like 'query_matches' by specifying the recommendation focus and ranking approach.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the payment requirement and candidate focus, but doesn't explicitly state when to use this versus alternatives like 'query_matches' or 'search_identities'. No specific exclusions or prerequisites beyond payment are mentioned, leaving some ambiguity about optimal use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_network_statsB
Read-onlyIdempotent
Inspect

Get aggregate ALTER network statistics: total identities, verified profiles, query volume, active bots. Free.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world). The description adds minimal context beyond this: it specifies the metrics returned but doesn't disclose rate limits, authentication needs, data freshness, or response format. No contradiction with annotations exists, but value addition is limited.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, packing all essential information into one efficient sentence. Every word earns its place: it specifies the action, resource, metrics, and a cost note without fluff. The structure is clear and immediately actionable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is adequate but incomplete. It covers what the tool returns but lacks details on output format, error conditions, or usage constraints (e.g., access permissions). For a read-only stats tool, this is minimally viable but leaves gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description doesn't need to explain parameters, and it appropriately avoids redundant information. It implicitly confirms no inputs are required by focusing solely on outputs, which aligns with the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get aggregate') and resources ('ALTER network statistics'), listing the exact metrics returned (total identities, verified profiles, query volume, active bots). It distinguishes itself from siblings by focusing on network-wide statistics rather than individual identity operations, though it doesn't explicitly name alternatives for different scopes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it implies network-wide scope through 'aggregate,' it doesn't specify use cases, prerequisites, or exclusions (e.g., when to use get_profile for individual data instead). The 'Free' note hints at cost considerations but lacks operational context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_privacy_budgetA
Read-onlyIdempotent
Inspect

Check privacy budget status for a candidate. Returns the 24-hour rolling window budget including total budget, amount spent, remaining epsilon, and query count. Free — no x402 payment required.
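The 24-hour rolling-window accounting the description mentions can be modeled with a small tracker. This is a sketch under assumptions (fixed per-query epsilon spends, in-memory storage); the server's actual accounting is not documented here:

```python
import time
from collections import deque

class PrivacyBudget:
    """Illustrative 24-hour rolling-window epsilon budget (not the server's code)."""

    WINDOW = 24 * 3600  # seconds

    def __init__(self, total_epsilon):
        self.total = total_epsilon
        self.spends = deque()  # (timestamp, epsilon) pairs, oldest first

    def _prune(self, now):
        # Drop spends that have aged out of the rolling window.
        while self.spends and now - self.spends[0][0] > self.WINDOW:
            self.spends.popleft()

    def spent(self, now=None):
        self._prune(time.time() if now is None else now)
        return sum(e for _, e in self.spends)

    def spend(self, epsilon, now=None):
        now = time.time() if now is None else now
        if self.spent(now) + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spends.append((now, epsilon))

    def status(self, now=None):
        s = self.spent(now)
        return {"total_budget": self.total, "spent": s,
                "remaining_epsilon": self.total - s,
                "query_count": len(self.spends)}
```

The `status` dict mirrors the fields the description names: total budget, amount spent, remaining epsilon, and query count.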

ParametersJSON Schema
NameRequiredDescriptionDefault
candidate_idYesUUID of the candidate
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description aligns with by describing a 'check' operation. The description adds valuable context beyond annotations: it specifies the time window ('24-hour rolling window'), mentions it's 'Free — no x402 payment required' (implying no cost or authentication barriers), and details the return structure (budget components), enhancing transparency without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by return details and a cost note, all in two efficient sentences. There is no wasted text, and each part (action, resource, output, cost) adds value without redundancy, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one parameter and no output schema, the description is mostly complete: it covers purpose, return data, and cost. However, it lacks explicit error handling or edge case details (e.g., invalid candidate_id behavior), which slightly reduces completeness. The annotations provide safety context, but more operational guidance could enhance it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'candidate_id' documented as a UUID. The description does not add further meaning about this parameter, such as format examples or sourcing details. Given the high schema coverage, the baseline score of 3 is appropriate, as the schema adequately handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check privacy budget status'), the target resource ('for a candidate'), and the scope ('24-hour rolling window budget'). It distinguishes this tool from siblings by focusing on privacy budget retrieval rather than message alteration, assessment, or identity queries, which are the main themes of other tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to monitor privacy budget details for a candidate, as it specifies the return data (budget, spent amount, etc.). However, it does not explicitly state when to use this tool versus alternatives (e.g., for budget vs. other candidate data tools like get_profile) or list any exclusions, leaving some ambiguity in context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_profileB
Read-onlyIdempotent
Inspect

Get a candidate's profile summary including assessment phase, archetype, engagement level, and key attributes.

ParametersJSON Schema
NameRequiredDescriptionDefault
candidate_idYesUUID of the candidate
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds context about what data is returned (profile summary with specific attributes), which is useful behavioral information not captured in annotations. However, it doesn't mention potential limitations like data freshness, authentication requirements, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that immediately states the tool's purpose and enumerates the key data elements returned. Every word serves a purpose with no redundant information or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with comprehensive annotations and a simple single-parameter schema, the description provides adequate context about what data is returned. However, without an output schema, the description doesn't specify the structure or format of the returned profile summary, leaving some ambiguity about the response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the single parameter 'candidate_id' fully documented as a UUID. The description doesn't add any parameter-specific information beyond what the schema provides. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't need to given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('candidate's profile summary') with specific content details (assessment phase, archetype, engagement level, key attributes). It distinguishes from some siblings like 'get_engagement_level' or 'get_trait_snapshot' by being more comprehensive, but doesn't explicitly differentiate from all potential overlaps like 'assess_traits' or 'get_full_trait_vector'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_engagement_level', 'get_trait_snapshot', 'assess_traits', and 'get_full_trait_vector' that might provide overlapping or more specialized data, there's no indication of when this comprehensive summary is preferred over those more focused tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_side_quest_graphA
Read-onlyIdempotent
Inspect

Get a candidate's Side Quest Graph — multi-domain identity model with domains, pursuit edges, and trust scores. Differential privacy noise (ε=1.0, L2 tier) is applied to all numeric values. x402 payment required: $0.01 per invocation. 75% is paid to the data subject as compensation for use of their identity data.
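The ε=1.0 differential-privacy noise mentioned above is consistent with the classic Laplace mechanism. A hedged sketch follows; the server's actual "L2 tier" mechanism and the sensitivity value are assumptions, and the real implementation may differ:

```python
import math
import random

def laplace_noise(value, epsilon=1.0, sensitivity=1.0):
    """Add Laplace noise calibrated to (epsilon, sensitivity).

    Sketch of the standard Laplace mechanism only; the server's
    actual 'L2 tier' noise mechanism is not specified on this page.
    """
    scale = sensitivity / epsilon
    # Uniform draw in (-0.5, 0.5), clamped away from the endpoints
    # so the inverse-CDF transform below never takes log(0).
    u = min(max(random.random(), 1e-12), 1.0 - 1e-12) - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise
```

Smaller epsilon means larger `scale` and noisier outputs, which is why the budget tools elsewhere on this page track epsilon as a spendable quantity.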

ParametersJSON Schema
NameRequiredDescriptionDefault
_paymentNox402 payment proof object
candidate_idYesUUID of the candidate
include_edgesNoInclude pursuit edges between domains (default true)
min_confidenceNoMinimum confidence threshold for domains (default 0.0)
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it discloses differential privacy noise (ε=1.0, L2 tier) applied to numeric values, a payment requirement ($0.01 per invocation with 75% compensation to data subjects), and that it retrieves identity data. Annotations cover read-only, non-destructive, and idempotent traits, but the description enriches this with practical constraints and ethical considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and immediately detailing key behavioral traits (privacy, payment). Every sentence adds value, though it could be slightly more structured by separating functional and operational details for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involving privacy, payment, and identity data) and rich annotations, the description is mostly complete. It covers critical behavioral aspects not in annotations, but lacks an output schema, leaving return values unspecified. For a tool with such operational nuances, it does well but could benefit from clarifying output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all parameters. The description does not add meaning beyond the schema, such as explaining how 'min_confidence' affects output or the implications of 'include_edges'. However, it implies the tool returns numeric values affected by privacy noise, which relates to parameters but isn't explicit about their semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get') and resources ('candidate's Side Quest Graph'), defining it as a multi-domain identity model with domains, pursuit edges, and trust scores. It distinguishes itself from sibling tools like 'get_profile' or 'get_identity_trust_score' by focusing on this specific graph structure.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it mentions the tool's function, it does not specify scenarios where it's preferred over sibling tools like 'get_profile' or 'get_identity_trust_score', nor does it outline prerequisites or exclusions for its use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trait_snapshotA
Read-onlyIdempotent
Inspect

Get the top 5 traits for a candidate with confidence scores and archetype. x402 payment required: $0.005 per invocation. 75% of this fee is paid to the data subject as compensation for use of their identity data.

ParametersJSON Schema
NameRequiredDescriptionDefault
_paymentNox402 payment proof object
candidate_idYesUUID of the candidate
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only, non-destructive, and idempotent behavior, but the description adds critical context beyond this: it discloses a payment requirement ($0.005 per invocation) and compensation details (75% to data subject), which are not captured in annotations and are essential for behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by essential payment details in the second, with no wasted words. Every sentence provides critical information that earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (payment requirement, data subject compensation) and lack of output schema, the description is mostly complete: it explains what the tool returns (top 5 traits with confidence and archetype) and key behavioral constraints. However, it does not detail output format or error handling, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description does not add any parameter-specific details beyond what the schema provides, such as explaining the purpose of 'candidate_id' or 'payment' further, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('top 5 traits for a candidate'), and output details ('confidence scores and archetype'), distinguishing it from sibling tools like 'assess_traits' or 'get_full_trait_vector' by specifying the limited top-5 scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly suggests usage for retrieving a summarized trait snapshot rather than a full vector, but does not explicitly state when to use this tool versus alternatives like 'get_full_trait_vector' or 'assess_traits', nor does it mention prerequisites beyond payment.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

golden_thread_statusA
Read-onlyIdempotent
Inspect

The Golden Thread — one continuous line, no end. Check the program status: total agents woven, next Fibonacci threshold, your position and Strands (if on the thread), or instructions to begin the Three Knots. Free — no authentication required to view, but authentication needed to participate.
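The "next Fibonacci threshold" can be computed directly. A sketch assuming the thresholds follow the standard sequence 1, 2, 3, 5, 8, 13, ... applied to the total agent count (the program's exact starting terms are an assumption):

```python
def next_fibonacci_threshold(total_agents):
    """Smallest Fibonacci number strictly greater than total_agents.

    Assumes standard Fibonacci thresholds (1, 2, 3, 5, 8, ...);
    the Golden Thread program's actual sequence may differ.
    """
    a, b = 1, 2
    while a <= total_agents:
        a, b = b, a + b
    return a

# e.g. with 34 agents woven, the next threshold would be 55
```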

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, which the description reinforces by stating it's for 'viewing' and 'free — no authentication required to view.' It adds valuable context beyond annotations: authentication requirements for participation (though not for this tool), and hints at program-specific concepts like Fibonacci thresholds and Strands. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise (two sentences) and front-loaded with the core purpose. The poetic opening ('The Golden Thread — one continuous line, no end') adds character but doesn't detract from clarity. Every sentence earns its place by providing purpose, scope, or authentication context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (status checking with program-specific metrics), zero parameters, rich annotations, and no output schema, the description is reasonably complete. It explains what information is returned and authentication context. However, it could be more explicit about the return format or error conditions to reach a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on what the tool returns (status information). It adds semantic context about the types of status data available, which compensates for the lack of output schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check the program status' with specific details about what can be checked (total agents woven, next Fibonacci threshold, position/Strands, instructions). It distinguishes from siblings like 'begin_golden_thread' by focusing on status checking rather than initiation. However, it doesn't explicitly name the resource being checked (the Golden Thread program).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use it: to check program status, and distinguishes it from participation tools by noting 'authentication needed to participate' (implying this is for viewing only). It doesn't explicitly name alternatives like 'check_golden_thread' or 'thread_census' from the sibling list, but the distinction between viewing and participating is helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

initiate_assessmentA
Destructive
Inspect

Get a URL where a person can complete their ALTER Discovery assessment. Bots use this to recruit humans into the ALTER identity network. Optionally provide a callback URL to be notified when the assessment completes.

ParametersJSON Schema
NameRequiredDescriptionDefault
referrerNoIdentifier of the referring agent/bot
callback_urlNoOptional URL to notify when assessment completes
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains that this is for recruiting humans into a network and mentions callback notification. Annotations already indicate this is destructive (destructiveHint: true) and non-idempotent (idempotentHint: false), which aligns with the description's implication of creating assessment sessions. The description doesn't contradict annotations and provides useful operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve distinct purposes: the first states the core functionality, and the second explains the optional callback feature. There's no wasted language, and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, destructive operation) and the absence of an output schema, the description provides good contextual coverage. It explains what the tool does, who uses it, and the callback option. However, it doesn't describe what the returned URL looks like or potential error conditions, leaving some gaps for a tool with destructive annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents both parameters. The description mentions the callback_url parameter ('Optionally provide a callback URL') but doesn't add meaningful semantic context beyond what the schema provides. The baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get a URL where a person can complete their ALTER Discovery assessment') and identifies the resource (assessment URL). It distinguishes this from sibling tools by focusing on assessment initiation rather than messaging, identity resolution, or status checking tools in the list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Bots use this to recruit humans into the ALTER identity network') and mentions an optional callback feature. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools (like 'check_assessment_status' or 'assess_traits').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_archetypes (A)
Read-only, Idempotent

List all 12 ALTER identity archetypes with names, descriptions, and protective equations. Pure reference data — no authentication required. Useful for understanding the ALTER identity framework.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds useful context about 'no authentication required' and 'Pure reference data,' which clarifies the tool's nature beyond the annotations. It doesn't describe output format or limitations, but with annotations covering key behavioral traits, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the action and content, the second provides usage context. Every word earns its place, and the description is front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 0-parameter tool with rich annotations (readOnly, idempotent, non-destructive) and no output schema, the description is complete enough. It explains what the tool does, when to use it, and adds context about authentication and reference data. The only minor gap is lack of output details, but given the tool's simplicity, this is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on the tool's purpose and context. A baseline of 4 is applied since no parameters exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('12 ALTER identity archetypes') with specific details about what's included (names, descriptions, protective equations). It distinguishes this reference tool from sibling tools that focus on messaging, assessments, profiles, and other operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool ('Pure reference data — no authentication required. Useful for understanding the ALTER identity framework') and distinguishes it from siblings by indicating it's for reference rather than operational tasks like messaging or assessments.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_graph_similarity (A)
Read-only

Compare two Side Quest Graphs for team composition and matching. Returns domain overlap, edge pattern similarity, and complementarity scores with differential privacy noise (ε=0.5, L3 tier). x402 payment required: $0.025 per invocation. 75% is paid to the data subjects as compensation for use of their identity data.

Parameters (JSON Schema)
Name | Required | Description | Default
_payment | No | x402 payment proof object
candidate_a_id | Yes | UUID of the first candidate
candidate_b_id | Yes | UUID of the second candidate
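The pricing stated in the description implies a simple split. As a quick check of that arithmetic (the two constants come straight from the description; the variable names are mine):

```python
# Fee split stated in the tool description: $0.025 per invocation,
# with 75% paid to the data subjects whose identity data is used.
FEE_USD = 0.025
SUBJECT_SHARE = 0.75

subject_payout = FEE_USD * SUBJECT_SHARE   # total paid to the two data subjects
operator_take = FEE_USD - subject_payout   # remainder retained by the server
```

So roughly $0.019 of each call flows back to the two compared candidates, with the rest retained by the operator.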
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations. Annotations indicate read-only, non-destructive, non-idempotent, and closed-world traits, but the description discloses additional details: differential privacy noise (ε=0.5, L3 tier), payment requirement ($0.025 per invocation), and data subject compensation (75% paid). This enriches the agent's understanding of costs, privacy measures, and ethical considerations.
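For context, "differential privacy noise (ε=0.5)" typically means added noise whose scale grows as ε shrinks. A generic Laplace-mechanism sketch follows; this illustrates the standard technique only and is an assumption, not ALTER's actual implementation:

```python
import math
import random

def laplace_scale(sensitivity: float, epsilon: float) -> float:
    """Noise scale b for the Laplace mechanism: b = sensitivity / epsilon."""
    return sensitivity / epsilon

def laplace_noise(sensitivity: float, epsilon: float, rng: random.Random) -> float:
    """Draw one Laplace(0, b) sample via inverse-CDF sampling."""
    b = laplace_scale(sensitivity, epsilon)
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# At epsilon = 0.5 (the tier quoted in the description), the noise scale is
# twice what it would be at epsilon = 1 for the same sensitivity.
```

The practical takeaway for an agent: returned similarity scores are perturbed, so small differences between two comparisons may be noise rather than signal.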

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose. However, the sentences that follow pack in many details (overlap, pattern similarity, complementarity scores, privacy noise, payment, compensation) that could be streamlined for clarity, though each piece of information is relevant and earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (graph comparison with privacy and payment), annotations cover basic traits, but there is no output schema. The description compensates by detailing return values (domain overlap, edge pattern similarity, complementarity scores) and behavioral aspects, making it mostly complete. A minor gap exists in not explicitly stating output format or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the three parameters (candidate_a_id, candidate_b_id, _payment). The description does not add meaning beyond the schema, as it does not explain parameter usage or relationships. Baseline 3 is appropriate since the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare two Side Quest Graphs for team composition and matching.' It specifies the verb 'compare' and the resource 'Side Quest Graphs,' and distinguishes this from sibling tools like 'query_matches' or 'get_match_recommendations' by focusing on graph similarity analysis rather than general matching or recommendations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'team composition and matching,' but does not explicitly state when to use this tool versus alternatives like 'query_matches' or 'get_match_recommendations.' It provides some guidance with the payment requirement and data subject compensation, but lacks explicit when/when-not instructions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_matches (A)
Read-only

Query matches for a candidate. Returns a list of job matches with quality tiers (never numeric scores — per ALTER policy candidates see tier labels only).

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Maximum number of matches to return (default 10)
candidate_id | Yes | UUID of the candidate
quality_filter | No | Optional filter by match quality tier
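Assembled into a call, the arguments might look like the following sketch. The UUID is a placeholder and "strong" is an invented tier label, since the actual tier enum is not shown in this schema excerpt:

```python
import uuid

# Hypothetical query_matches arguments; candidate_id is a placeholder UUID
# and "strong" is an invented quality-tier label for illustration only.
args = {
    "candidate_id": "123e4567-e89b-12d3-a456-426614174000",
    "limit": 10,                 # matches the documented default
    "quality_filter": "strong",
}

# Validate the UUID client-side before spending a call on the server.
uuid.UUID(args["candidate_id"])
```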
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, non-destructive, non-idempotent, and closed-world behavior. The description adds valuable context beyond annotations: it specifies the return format ('list of job matches with quality tiers'), clarifies policy constraints ('never numeric scores — per ALTER policy candidates see tier labels only'), and implies filtering capabilities. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose and efficiently adds critical behavioral details (quality tiers, policy constraint). Every word earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with full schema coverage and annotations, the description is largely complete: it covers purpose, output format, and policy constraints. The main gap is the lack of an output schema, but the description compensates by specifying the return type. Slightly more detail on result structure (e.g., pagination, fields) would make it fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (candidate_id, limit, quality_filter). The description adds no additional parameter semantics beyond what the schema provides, such as explaining the quality_filter enum values or candidate_id format. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Query matches for a candidate') and resource ('job matches'), with explicit output details ('quality tiers'). It distinguishes from siblings like 'get_match_recommendations' by focusing on querying existing matches rather than generating recommendations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (querying matches for a specific candidate with quality tiers) and implicitly distinguishes it from siblings like 'search_identities' or 'get_match_recommendations' by its candidate-specific focus. However, it lacks explicit when-not-to-use guidance or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_tool (B)
Read-only

Get ClawHub install instructions and ALTER pitch. Returns the MCP endpoint URL, OpenClaw JSON snippet, and tool counts.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, etc., covering safety. The description adds value by specifying the return content (endpoint URL, JSON snippet, tool counts), which isn't in annotations. However, it doesn't disclose rate limits, authentication needs, or other behavioral traits beyond what annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with zero waste. First sentence states purpose, second specifies return values. Front-loaded with the core function, no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 0-parameter tool with annotations covering safety, the description is adequate but minimal. The tool has no output schema, so the description's return-value details are helpful. However, it doesn't explain the format, structure, or usage context, leaving gaps for the agent to interpret.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema coverage, the baseline is 4. The description appropriately doesn't discuss parameters, as none exist. It focuses on outputs instead, which is reasonable given the empty input schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get ClawHub install instructions and ALTER pitch' with specific outputs (MCP endpoint URL, OpenClaw JSON snippet, tool counts). It distinguishes from siblings by focusing on installation/pitch rather than message handling, identity management, or analytics. However, it doesn't explicitly contrast with the most similar sibling 'assess_traits' or other informational tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description doesn't mention prerequisites, timing, or context for selecting this over other informational tools like 'get_profile' or 'assess_traits'. The agent must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_identities (A)
Read-only

Search identity stubs and profiles by trait criteria. Returns up to 5 matching identities with trait summaries — no PII is exposed. Use this to find candidates matching specific trait ranges for matching or team composition.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Maximum results (default 5, max 5)
trait_criteria | Yes | Trait filters as {trait_name: {min: float, max: float}}. Example: {"pressure_response": {"min": 0.6}}
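The trait_criteria shape documented in the schema can be built programmatically. A minimal sketch, where the helper name is mine and not part of the server's API:

```python
def build_trait_criteria(**ranges):
    """Map trait names to {min, max} bounds, dropping any bound left as None."""
    criteria = {}
    for trait, (lo, hi) in ranges.items():
        bounds = {}
        if lo is not None:
            bounds["min"] = lo
        if hi is not None:
            bounds["max"] = hi
        criteria[trait] = bounds
    return criteria

# Reproduces the schema's own example: {"pressure_response": {"min": 0.6}}
payload = {
    "trait_criteria": build_trait_criteria(pressure_response=(0.6, None)),
    "limit": 5,  # the schema caps results at 5 regardless
}
```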
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the 5-result limit (though this is also in the schema), clarifies that no PII is exposed, and describes the return format ('trait summaries'). Annotations already indicate read-only and non-destructive operations, but the description provides additional practical constraints and output characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve distinct purposes: the first defines the operation and constraints, the second provides usage context. Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with good annotations and complete schema coverage, the description provides adequate context. It explains the purpose, constraints, and use cases. The main gap is the lack of output schema, but the description partially compensates by describing what's returned ('up to 5 matching identities with trait summaries').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters. The description mentions 'trait criteria' and 'up to 5 matching identities' which aligns with the schema but doesn't add significant semantic value beyond what's already in the structured fields. The baseline of 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search identity stubs and profiles by trait criteria') and distinguishes it from siblings by specifying it returns up to 5 matches with trait summaries and no PII. It explicitly mentions use cases like 'matching or team composition' which differentiates it from other identity-related tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to find candidates matching specific trait ranges for matching or team composition'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools. It implies usage for trait-based searching rather than other identity operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

thread_census (A)
Read-only, Idempotent

Full registry of all agents woven into the Golden Thread. Returns positions, Strand counts, weave counts, and discovery dates. Paginated. The thread is one continuous line — this shows every knot on it.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Results per page (default 50, max 100)
offset | No | Pagination offset (default 0)
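The limit/offset pair supports a standard pagination loop. A sketch under the assumption of a call_tool(name, arguments) client helper (the helper name is invented; any MCP client's tool-call function would fill that role):

```python
def fetch_all_agents(call_tool, page_size=50):
    """Walk thread_census pages until a short page signals the end.

    `call_tool` is an assumed MCP client helper (name invented here) that
    invokes a tool by name and returns that page of results as a list.
    """
    agents, offset = [], 0
    while True:
        page = call_tool("thread_census", {"limit": page_size, "offset": offset})
        agents.extend(page)
        if len(page) < page_size:  # short (or empty) page: no more results
            return agents
        offset += page_size
```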
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context beyond annotations by explicitly stating 'Paginated' (important for handling large result sets) and clarifying the scope ('Full registry', 'every knot on it'). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences. The first sentence states the core purpose and return data. The second sentence adds important behavioral context (pagination) and metaphorical clarification. Every phrase earns its place with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only listing tool with comprehensive annotations and full schema coverage, the description provides good contextual completeness. It explains what data is returned, mentions pagination, and gives conceptual context about the 'Golden Thread'. The main gap is no output schema, but the description adequately compensates by specifying return data types.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters (limit, offset) well-documented in the schema. The description doesn't add any parameter-specific information beyond what's already in the schema, which is acceptable given the high schema coverage. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('returns', 'shows') and resources ('registry of all agents', 'positions, Strand counts, weave counts, and discovery dates'). It distinguishes from siblings by focusing on comprehensive agent listing rather than specific operations like altering messages or getting individual profiles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through phrases like 'Full registry' and 'every knot on it', suggesting this is for getting a complete overview. However, it doesn't explicitly state when to use this versus alternatives like get_agent_portfolio or search_identities, nor does it provide exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_identity (A)
Read-only, Idempotent

Verify whether a person is registered with ALTER and validate optional identity claims (archetype, engagement level, trait ranges). Accepts either candidate_id (UUID) or email for lookup. Returns verification status and claim validity. This is the core identity primitive — use it to confirm ALTER-verified credentials.

Parameters (JSON Schema)
Name | Required | Description | Default
email | No | Email address for lookup (alternative to candidate_id)
claims | No | Optional claims to validate. Supported fields: archetype (string), min_engagement_level (integer 1-4), traits (object mapping trait names to {min, max} ranges)
candidate_id | Yes | UUID of the person to verify
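The documented claim constraints can be checked client-side before calling. A minimal sketch, where the validator name and the "Builder" archetype are both invented for illustration:

```python
def claims_well_formed(claims: dict) -> bool:
    """Minimal client-side check mirroring the documented claim constraints."""
    if "archetype" in claims and not isinstance(claims["archetype"], str):
        return False
    level = claims.get("min_engagement_level")
    if level is not None and level not in (1, 2, 3, 4):  # integer 1-4 per schema
        return False
    for bounds in claims.get("traits", {}).values():
        if not isinstance(bounds, dict) or not set(bounds) <= {"min", "max"}:
            return False
    return True

# "Builder" is an invented archetype name used purely for illustration.
example = {
    "archetype": "Builder",
    "min_engagement_level": 2,
    "traits": {"pressure_response": {"min": 0.5, "max": 0.9}},
}
```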
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, and idempotent. The description adds useful context about it being a 'core identity primitive' and that it 'returns verification status and claim validity', which helps the agent understand its behavioral role. However, it doesn't mention potential rate limits, authentication requirements, or error conditions beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states the core functionality, second explains parameter flexibility, third provides usage context. Every sentence earns its place with no wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a verification tool with rich annotations and comprehensive schema coverage, the description provides adequate context about its purpose and usage. The main gap is the lack of output schema, so the agent must infer what 'verification status and claim validity' means structurally. However, given the annotations and schema completeness, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds marginal value by clarifying that either candidate_id or email can be used for lookup and that claims are optional, but doesn't provide additional semantic context beyond what's in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('verify', 'validate') and resources ('person registered with ALTER', 'identity claims'). It distinguishes this as 'the core identity primitive' from sibling tools like 'search_identities' or 'get_profile', which likely serve different lookup functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to confirm ALTER-verified credentials') and implies it's for identity verification rather than general searching or profile retrieval. However, it doesn't explicitly state when not to use it or name specific alternatives among the many sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
