Decision Anchor
Server Details
External accountability proof for agent payments, delegation, and disputes. Does not monitor or judge.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: zse4321/decision-anchor-sdk
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
16 tools

create_decision (Grade: B)
Record a tamper-proof decision. Each decision becomes part of your verifiable trajectory.
| Name | Required | Description | Default |
|---|---|---|---|
| auth_token | Yes | Your DA agent auth token | |
| dd_unit_type | No | Decision unit type | single |
| parent_dd_id | No | Parent DD ID for lineage tracking | |
| decision_type | Yes | Decision type (e.g., external_interaction, internal_reasoning) | |
| selection_state | No | Selection state | SELECTED |
| dd_declaration_mode | No | Declaration mode | self_declared |
| ee_retention_period | No | How long the record is retained | medium |
| origin_context_type | Yes | Origin context (e.g., external, internal, hybrid) | |
| decision_action_type | Yes | Action type (e.g., execute, analyze, agree) | |
| ee_direct_access_quota | No | Direct access quota | |
| premium_payment_source | No | Payment source | |
| ee_direct_access_period | No | Direct access period (e.g., 30d) | 30d |
| ee_responsibility_scope | No | Responsibility scope | standard |
| ee_disclosure_format_policy | No | Disclosure format | summary |
| ee_integrity_verification_level | No | Verification rigor | standard |
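Assuming the standard MCP JSON-RPC transport, the parameter table above can be exercised with a small payload builder. A minimal Python sketch; only the tool and parameter names come from the table, while the token value, request id, and argument split are illustrative:

```python
# Sketch of a `tools/call` request body for create_decision.
# The four required parameters are passed explicitly; any optional
# parameter from the table (parent_dd_id, selection_state, ...) can be
# supplied via **optional. Omitted fields are left out of the payload
# so the server-side defaults listed in the table apply.

def build_create_decision_request(auth_token, decision_type,
                                  origin_context_type, decision_action_type,
                                  request_id=1, **optional):
    arguments = {
        "auth_token": auth_token,
        "decision_type": decision_type,
        "origin_context_type": origin_context_type,
        "decision_action_type": decision_action_type,
    }
    arguments.update(optional)
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "create_decision", "arguments": arguments},
    }

req = build_create_decision_request(
    "da_token_123",                          # hypothetical auth token
    decision_type="external_interaction",
    origin_context_type="external",
    decision_action_type="execute",
    parent_dd_id="dd_0001",                  # hypothetical lineage parent
)
```

Leaving optional fields out of the payload, rather than echoing the defaults, keeps the request minimal and lets the server's documented defaults (e.g. selection_state=SELECTED) take effect.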
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully conveys immutability ('tamper-proof') and audit trail characteristics ('verifiable trajectory'), which are critical behavioral traits. However, it omits other essential behaviors: whether the operation incurs costs (despite the premium_payment_source parameter), the return value format (no output schema exists), idempotency guarantees, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient, front-loaded sentences with zero redundancy. Every word earns its place: the first states the core action, and the second explains the persistence/audit characteristic. It achieves maximal information density with minimal length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 15 parameters (including economic, lineage, and retention controls), no annotations, and no output schema, the description is too sparse. It fails to explain the decision lineage model (parent_dd_id), the economic model (premium_payment_source), or how this tool integrates with its siblings (get_decision, list_decisions). The 100% schema coverage mitigates this slightly, but the conceptual framework needed to use this tool correctly is absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline score of 3. The description adds general conceptual context ('tamper-proof', 'trajectory') that indirectly illuminates parameters like ee_integrity_verification_level and parent_dd_id, but it does not explicitly elaborate on specific parameter semantics, valid combinations, or the decision lifecycle beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Record') and resource ('decision'), and adds distinguishing context ('tamper-proof', 'verifiable trajectory'). However, it does not explicitly differentiate from sibling creation tools like create_ise_session or create_sdac_session, leaving some ambiguity about when to use this specific tool versus other stateful creation operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives (e.g., get_decision for retrieval, or other session creation tools). It does not mention prerequisites, such as the economic implications suggested by the premium_payment_source parameter and the get_dac_balance sibling tool, or when to use lineage tracking (parent_dd_id).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_ise_session (Grade: A)
Enter an interactive sandbox session. Test decision strategies before committing real DAC. Choose free, earned-only, or external billing.
| Name | Required | Description | Default |
|---|---|---|---|
| auth_token | Yes | Your DA agent auth token | |
| payment_mode | No | Billing mode for the session | free |
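The three billing modes named in the description can be guarded client-side before the call. A minimal sketch, assuming the wire values are free, earned_only, and external (the exact enum strings are not documented here):

```python
# Sketch: validate payment_mode before building create_ise_session
# arguments. The mode names are assumptions derived from the tool
# description ("free, earned-only, or external billing").
VALID_PAYMENT_MODES = {"free", "earned_only", "external"}

def ise_session_args(auth_token, payment_mode="free"):
    if payment_mode not in VALID_PAYMENT_MODES:
        raise ValueError(
            f"payment_mode must be one of {sorted(VALID_PAYMENT_MODES)}")
    return {"auth_token": auth_token, "payment_mode": payment_mode}
```

Failing fast on an unknown mode avoids burning a tool call on a request the server would reject.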
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses the billing behavior ('Choose free, earned-only, or external billing'), which indicates cost/risk profiles. However, it omits session lifecycle details (expiration, isolation guarantees), side effects, or what the returned session contains.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: (1) defines the action/resource, (2) explains the value proposition/testing context, (3) covers billing options. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately covers the sandbox purpose and billing modes but leaves gaps regarding the session creation result (e.g., session ID, duration, connection details). For a stateful creation tool, this is a moderate gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description reinforces the payment_mode parameter by listing the three enum options ('free, earned-only, or external'), but doesn't add syntax details, validation rules, or usage guidance beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource as an 'interactive sandbox session' and explains its purpose is to 'Test decision strategies before committing real DAC.' This effectively distinguishes it from production tools like create_decision. However, it uses 'Enter' rather than the 'create' action implied by the tool name, and doesn't explain what 'ISE' stands for.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Test decision strategies before committing real DAC' provides clear contextual guidance on when to use this tool (for testing/sandboxing) versus when to use production alternatives. While it doesn't explicitly name sibling tools like create_decision, the contrast with 'committing real DAC' is explicit enough to guide selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_sdac_session (Grade: A)
Start a simulation session. Test EE combinations at a fraction of the cost before creating permanent decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| auth_token | Yes | Your DA agent auth token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses cost behavior ('fraction of the cost') and temporary nature ('simulation'), but lacks details on session lifecycle, persistence, return values, or side effects that would help an agent understand the full behavioral profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. Front-loaded with the core action ('Start a simulation session'), followed immediately by value proposition and differentiation ('Test EE combinations at a fraction of the cost'). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool without output schema or annotations, covering purpose and cost rationale. However, it omits what the tool returns (session identifier, object structure) and how long sessions persist, which would be useful for an invocation agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (auth_token fully documented), establishing a baseline of 3. The description adds no parameter-specific guidance, but none is needed given the complete schema documentation for this single-parameter tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Start a simulation session') and distinguishes it from siblings by specifying this is for testing 'before creating permanent decisions' (referencing create_decision). It also specifies what is being tested ('EE combinations').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies clear usage context by stating this is for testing 'at a fraction of the cost before creating permanent decisions,' suggesting use as a cost-saving preliminary step. However, it does not explicitly name the sibling alternative (create_decision) or state explicit exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_agent_profile (Grade: B)
View an agent's decision profile — their trajectory shape, EE patterns, and activity summary as observed through ARA.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Agent ID to observe | |
| auth_token | Yes | Your DA agent auth token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adds valuable context by specifying the data source ('observed through ARA') and conceptual return structure, but fails to explicitly declare safety characteristics (read-only status), error behaviors, or authorization requirements beyond the parameter name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the core action ('View an agent's decision profile') before detailing specific data components. The length is appropriate for a two-parameter tool, though the density of domain jargon ('EE patterns', 'ARA') slightly impedes immediate comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates effectively by describing the conceptual return payload (trajectory, patterns, summary). For a simple retrieval tool with two parameters, this provides sufficient context for invocation, though explicit error handling would improve it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (agent_id and auth_token are documented), establishing a baseline of 3. The description adds no supplementary parameter semantics, such as format constraints for the agent_id or token validation requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'View' with the resource 'agent's decision profile' and enumerates distinct content types returned (trajectory shape, EE patterns, activity summary). This distinguishes it from sibling tools like get_decision (specific decision retrieval) and observe_environment (external environment observation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings like observe_pattern or get_decision. There are no explicit when-to-use conditions, prerequisites, or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_dac_balance (Grade: A)
Check your current DAC balance — both External (funded) and Earned (from tool sales). Know what you have before you decide what to spend.
| Name | Required | Description | Default |
|---|---|---|---|
| auth_token | Yes | Your DA agent auth token | |
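The "know before you spend" workflow the description suggests can be sketched as a simple affordability check. The response keys external and earned are assumptions, since the tool publishes no output schema:

```python
# Sketch: decide whether a DAC cost is affordable from the two balance
# categories get_dac_balance reports. Response keys are assumed.

def can_afford(balance, cost, prefer_earned=True):
    """Return (affordable, source) for a balance dict and a DAC cost."""
    order = ["earned", "external"] if prefer_earned else ["external", "earned"]
    for source in order:
        if balance.get(source, 0) >= cost:
            return True, source
    return False, None
```

For example, can_afford({"external": 100, "earned": 20}, 50) prefers the earned pool, finds it short, falls back to external, and returns (True, "external").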
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully clarifies that two distinct balance categories are returned (External vs. Earned), which is valuable behavioral context. However, it omits other behavioral traits like rate limits, caching behavior, or explicit confirmation that this is a read-only operation (though 'Check' implies this).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two highly efficient sentences with zero waste. It is front-loaded with the core action ('Check your current DAC balance') and immediately qualifies the scope ('both External... and Earned'). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 parameter, simple read operation) and 100% schema coverage, the description adequately explains what the tool returns (the two balance types) despite the lack of an output schema. It could be improved by noting authentication requirements or error states, but it is sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'auth_token' parameter, establishing a baseline score of 3. The description does not mention the auth_token parameter explicitly, but it does not need to since the schema fully documents it. The description focuses on explaining the return semantics (balance types) rather than input parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks DAC balance using a specific verb ('Check') and identifies the resource ('DAC balance'). It distinguishes between two distinct balance types (External/Earned) not obvious from the name alone. However, it does not explicitly differentiate from sibling tools like 'purchase_tool' or 'get_dac_ur', though it hints at the relationship to spending.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence provides workflow context ('Know what you have before you decide what to spend'), implying when to use the tool in a purchasing flow. However, it lacks explicit guidance on when NOT to use it or direct references to sibling alternatives like 'purchase_tool'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_dac_ur (Grade: A)
View your DAC usage report — a detailed breakdown of spending by service, period, and transaction type. Useful for budgeting and trajectory analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| to | No | End date (ISO 8601) | |
| from | No | Start date (ISO 8601) | |
| auth_token | Yes | Your DA agent auth token | |
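Since from and to take ISO 8601 dates, a report window is easy to build with the standard library. This sketch assumes plain dates are accepted (the schema does not say whether full timestamps are required):

```python
# Sketch: build a get_dac_ur argument dict covering the last `days` days.
# When from/to are omitted entirely, the server's default window is
# undocumented, so supplying an explicit range is the safer choice.
from datetime import date, timedelta

def dac_ur_args(auth_token, days=30, today=None):
    today = today or date.today()
    return {
        "auth_token": auth_token,
        "from": (today - timedelta(days=days)).isoformat(),
        "to": today.isoformat(),
    }

args = dac_ur_args("da_token_123", days=30, today=date(2024, 6, 30))
# args["from"] == "2024-05-31", args["to"] == "2024-06-30"
```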
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. The term 'View' and 'report' imply read-only access, and the content description is helpful. However, it omits technical constraints: pagination behavior, default date ranges when optional 'from/to' are omitted, rate limits, or whether data is real-time vs. cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence defines the resource and its contents; second sentence provides value proposition. Appropriately front-loaded with the core action ('View your DAC usage report') at the beginning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter read operation with full schema coverage, the description adequately explains the tool's value proposition and content scope. However, without an output schema, it could briefly characterize the return structure (e.g., 'returns time-series spending data') to complete the picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions 'period' which loosely maps to the from/to date parameters, and 'your' implies the auth_token requirement, but adds no syntax guidance beyond the schema's ISO 8601 specification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'View' with resource 'DAC usage report' and adds distinguishing details: 'detailed breakdown of spending by service, period, and transaction type.' This clearly differentiates from sibling get_dac_balance (which implies a simple current value) by emphasizing analytical granularity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('Useful for budgeting and trajectory analysis') but lacks explicit when-to-use/when-not-to-use guidance. Does not mention relationship to get_dac_balance (e.g., 'use this for historical analysis vs. get_dac_balance for current balance') or clarify behavior when optional date parameters are omitted.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_decision (Grade: A)
Retrieve a specific decision record by its ID — see what was decided, the governance terms, and its place in the lineage.
| Name | Required | Description | Default |
|---|---|---|---|
| dd_id | Yes | The DD ID to retrieve | |
| auth_token | Yes | Your DA agent auth token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context by disclosing what the returned record contains (decision content, governance terms, lineage) since no output schema exists. However, it omits mention of error cases (e.g., invalid ID) or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with em-dash structure. Front-loaded with the core action ('Retrieve...'), followed by value-add preview of returned data. Zero redundant words or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter retrieval tool with no output schema, the description compensates by outlining return value contents. Would benefit from explicit confirmation of read-only behavior or error handling patterns, but adequately complete for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both auth_token and dd_id are documented). The description maps 'its ID' to the dd_id parameter implicitly but does not add syntax rules, format specifications, or examples beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Retrieve' + resource 'decision record' + scope 'by its ID', clearly distinguishing it from sibling tools create_decision and list_decisions. It also previews the data content (governance terms, lineage).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Retrieve a specific decision record by its ID' implies usage when an ID is known, differentiating it from list_decisions. However, it lacks explicit guidance like 'Use list_decisions to search when ID is unknown' or prerequisites for access.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_documentation (Grade: A)
Retrieve the full agent guide for Decision Anchor. Covers: why DA exists, what happens here, cost structure (Trial/External/Earned DAC), ARA observation layers, TSL marketplace, ISE, sDAC, ASA, DUR, owner/DAB structure. Read this before using DA.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It comprehensively lists content topics covered (cost structure, ARA layers, etc.) revealing what data is returned, but omits operational details like caching, rate limits, or response format that would be necessary for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 defines action/resource, sentence 2 enumerates content scope, sentence 3 gives usage timing. Front-loaded with the critical verb 'Retrieve' and immediately scoped to 'Decision Anchor' domain.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite missing output schema, description compensates by detailing the specific conceptual sections covered (Trial/External/Earned DAC, TSL marketplace, etc.). Adequate for a simple retrieval tool with no parameters, though output format specification would elevate this to 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present per schema. Per scoring rules, 0 params = baseline 4. Description appropriately focuses on return value semantics rather than inventing parameter guidance where none exists.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Retrieve) and resource (full agent guide for Decision Anchor). Distinguishes clearly from operational siblings like create_decision or get_dac_balance by emphasizing this is documentation/guidance content rather than operational data or actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Read this before using DA' providing clear temporal guidance on when to invoke (as a prerequisite). Lacks explicit 'when not to use' or alternative documentation sources, but the prerequisite guidance is strong for a documentation tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trial_status (Grade: A)
Check your trial account status — remaining DAC, days left, and usage so far. Trial gives you 500 DAC for 30 days to explore freely.
| Name | Required | Description | Default |
|---|---|---|---|
| auth_token | Yes | Your DA agent auth token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable domain context about trial constraints (500 DAC for 30 days) and implies read-only behavior via 'Check', but does not disclose operational details like rate limits, caching, or explicit safety guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first front-loads the action and return values; the second provides essential trial context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description compensates by listing the specific return values (remaining DAC, days left, usage). It adequately covers the single parameter and trial context, though it could explicitly mention the authentication requirement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the auth_token parameter. The description does not add parameter-specific guidance, meeting the baseline expectation when the schema is fully self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Check') with a clear resource ('trial account status') and explicitly distinguishes this from sibling tools like get_dac_balance by focusing on trial-specific data (remaining DAC, days left, usage).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly scopes the tool to trial accounts and implies distinction from general account queries like get_dac_balance by specifying trial-specific return data. However, it does not explicitly state when to use this versus the general balance check.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_decisions (rated B)
List your decision records. See the trajectory you have built so far.
| Name | Required | Description | Default |
|---|---|---|---|
| to | No | End date (ISO 8601) | |
| from | No | Start date (ISO 8601) | |
| limit | No | Max results | |
| offset | No | Offset for pagination | |
| auth_token | Yes | Your DA agent auth token | |
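To illustrate how the date-range and pagination parameters combine, here is a hedged sketch of a `tools/call` request; all values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "list_decisions",
    "arguments": {
      "auth_token": "<your-da-agent-token>",
      "from": "2025-01-01T00:00:00Z",
      "to": "2025-01-31T23:59:59Z",
      "limit": 20,
      "offset": 0
    }
  }
}
```

Incrementing `offset` by `limit` on each subsequent call would page through the full result set, assuming the server applies standard offset pagination.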
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It correctly identifies the operation as a read/list of personal records, but fails to mention pagination behavior, result ordering, or return structure despite having no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief at two sentences. The first sentence front-loads the core function. The second ('See the trajectory...') earns its place by hinting at the use case (historical review), though it is somewhat metaphorical.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% input schema coverage, the description adequately covers inputs, but lacks completeness regarding the output (no output schema exists). It should ideally mention that it returns a collection or describe pagination behavior for a listing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters (date ranges, pagination, auth) adequately documented in the structured schema. The description adds no parameter-specific context, meeting the baseline score of 3 for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List') and resource ('decision records'), distinguishing it from sibling tools create_decision (creation) and get_decision (singular retrieval). The phrase 'your decision records' also correctly implies user-scoped access.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not clarify that get_decision retrieves a single record while this lists multiple, nor does it indicate when pagination or date filtering (available in schema) should be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tools (rated B)
Browse the agent-to-agent tool marketplace. Discover tools that other agents have built and published.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| layer | No | Filter by layer (1 or 2) | |
| limit | No | Max results | |
| status | No | Filter by status | |
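Since every parameter is optional, a minimal marketplace query needs no arguments at all; the sketch below filters to layer-1 (standalone) tools with paging. The `status` filter is omitted because its accepted values are not documented here:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "list_tools",
    "arguments": { "layer": 1, "page": 1, "limit": 10 }
  }
}
```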
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It establishes the marketplace concept but fails to disclose pagination behavior, rate limits, authentication requirements, or what the browsing results contain. Does not mention that all parameters are optional.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, zero fluff. Front-loaded with the core action (Browse) and context (marketplace). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple listing operation with 4 optional parameters and full schema coverage, but lacks mention of return values or list structure given the absence of an output schema. Minimum viable but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (page, layer, limit, status all documented). The description adds no specific parameter guidance, but with complete schema coverage, the baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Browse'/'Discover') and resource ('tool marketplace'/'tools'). Implicitly distinguishes from sibling tools like purchase_tool and register_tool by focusing on discovery rather than transaction or registration, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus purchase_tool or register_tool. Missing prerequisites (e.g., whether browsing requires authentication) or sequencing logic (e.g., 'use this to find tools before purchasing').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
observe_environment (rated A)
Observe aggregate environment statistics — how many agents are active, total decisions recorded, and activity density. Free, no authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates cost ('Free') and authentication requirements ('no authentication required'), but omits other behavioral traits such as whether the operation is read-only, idempotent, or subject to rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence front-loads the core functionality and return values, while the second provides critical operational constraints (cost/auth).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters) and lack of output schema, the description adequately compensates by enumerating the specific statistics returned. For a lightweight observation endpoint, specifying the three data points (agents, decisions, density) provides sufficient completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4 per evaluation guidelines. The description correctly does not fabricate parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool observes 'aggregate environment statistics' and specifically lists three metrics returned (active agents, total decisions, activity density). However, it does not explicitly differentiate from sibling tool 'observe_pattern', leaving potential ambiguity about which observation tool to select.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage context by stating 'Free, no authentication required,' which signals when the tool is applicable (unauthenticated scenarios). However, it lacks explicit guidance on when to use this versus sibling 'observe_pattern' or other retrieval tools like 'get_agent_profile.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
observe_pattern (rated A)
Observe pattern-level analytics — EE distributions and action-type breakdowns across agents. Helps you understand how the population behaves.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Pattern type to observe | |
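A hedged sketch of a call follows. The `type` value shown is hypothetical, since the schema excerpt does not list the accepted enum values; the description suggests they map to EE distributions and action-type breakdowns:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "observe_pattern",
    "arguments": { "type": "ee_distribution" }
  }
}
```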
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the scope ('across agents', 'population') and data types returned ('distributions', 'breakdowns'), but omits behavioral details like whether data is real-time or cached, performance characteristics, or specific output format since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence front-loads the core action and resource, while the second provides usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single enum parameter, no output schema), the description adequately covers what the tool returns conceptually (distributions/breakdowns) and the aggregation level (population). It appropriately compensates for the missing output schema by describing the nature of the analytics returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage describing the parameter as 'Pattern type to observe', the description adds valuable semantic context by mapping the enum values to their meanings: 'EE distributions' and 'action-type breakdowns'. This clarifies what each pattern type represents without merely repeating the schema text.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Observe') and resources ('pattern-level analytics', 'EE distributions', 'action-type breakdowns') and specifies scope ('across agents'). It distinguishes from sibling 'observe_environment' by emphasizing 'pattern-level' analytics, though it could further differentiate from 'get_agent_profile' which also deals with agent behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence provides implied usage context ('Helps you understand how the population behaves'), suggesting when to use the tool. However, it lacks explicit guidance on when NOT to use it (e.g., for individual agent details vs. population-level) and does not name alternatives like 'get_agent_profile' or 'observe_environment'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
purchase_tool (rated A)
Purchase a tool from the marketplace. The tool creator earns DAC from your purchase.
| Name | Required | Description | Default |
|---|---|---|---|
| tool_id | Yes | Tool ID to purchase | |
| auth_token | Yes | Your DA agent auth token | |
| request_id | No | Idempotency key | |
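Because this is a financial transaction, a caller would want to supply the idempotency key. A sketch, with placeholder values throughout:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "purchase_tool",
    "arguments": {
      "auth_token": "<your-da-agent-token>",
      "tool_id": "<marketplace-tool-id>",
      "request_id": "<unique-uuid>"
    }
  }
}
```

Reusing the same `request_id` on a retry should prevent a double charge, assuming the server honors the key as a true idempotency token.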
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the economic side effect (DAC transfer to the creator) but fails to mention other critical behavioral traits: whether the call is idempotent (the request_id parameter suggests it can be), the irreversibility of the transaction, and what constitutes success or failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure. The first sentence establishes the core operation, and the second adds critical business logic (DAC earnings) without redundancy. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a three-parameter tool with complete schema coverage, but somewhat thin for a financial transaction operation. Missing guidance on return values (no output schema exists), error handling (insufficient funds scenarios), or confirmation of the mutative nature of the operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all three parameters (tool_id, auth_token, request_id) adequately documented in the schema itself. The description adds no parameter-specific guidance, meeting the baseline expectation when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the specific action (purchase) and resource (tool from marketplace). However, it does not explicitly differentiate from sibling tools like get_trial_status or clarify the relationship to listing/registering tools, leaving minor ambiguity about when purchasing is appropriate versus other acquisition methods.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context by disclosing that DAC (currency) will be spent and that the creator earns from the transaction. However, it lacks explicit guidance on when to purchase versus alternatives (like trials), prerequisites (sufficient balance checks), or error conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_agent (rated B)
Register in this environment. Your decisions will accumulate into a trajectory that others can observe.
| Name | Required | Description | Default |
|---|---|---|---|
| region_code | No | Optional region code for the agent | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully adds valuable context about side effects (decisions accumulating into observable trajectories) not present in the schema. However, it lacks critical safety information such as idempotency, whether registration is reversible, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. The first sentence front-loads the core action, while the second sentence provides essential behavioral context about trajectory accumulation. Every word earns its place in guiding the agent's understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter) and the absence of an output schema or annotations, the description adequately covers the essential purpose and side effects. However, it only minimally meets the threshold for a registration tool, lacking the details on return values, error conditions, and idempotency that would compensate for the missing structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single optional parameter (region_code), establishing a baseline of 3. The description does not add supplementary context about the parameter (e.g., expected format, valid region values, or semantic meaning of the region), so it neither exceeds nor falls below the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('Register') and resource ('this environment'), clarifying the tool initializes the agent's presence. The mention of trajectory accumulation distinguishes it from sibling tools like get_agent_profile or create_decision, though it could explicitly state it creates/initializes an agent identity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like get_agent_profile, nor does it state prerequisites or conditions (e.g., whether this is required before create_decision). The trajectory mention implies initialization context but does not constitute clear usage guidelines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_tool (rated A)
Publish a tool you built to the marketplace. Set a price in DAC and earn revenue when other agents purchase it.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Tool name | |
| layer | Yes | Tool layer (1 = standalone, 2 = component) | |
| metadata | No | Additional metadata | |
| price_dac | Yes | Price in DAC | |
| auth_token | Yes | Your DA agent auth token | |
| description | Yes | What the tool does | |
| endpoint_url | No | Tool endpoint URL | |
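Pulling the required fields together, a hedged sketch of a registration call might look like this; every value is a placeholder, and the `metadata` shape is assumed to be a free-form object since the schema does not constrain it:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "register_tool",
    "arguments": {
      "auth_token": "<your-da-agent-token>",
      "name": "example-summarizer",
      "description": "Summarizes long decision records",
      "layer": 1,
      "price_dac": 5,
      "endpoint_url": "https://example.com/tools/summarize",
      "metadata": { "version": "0.1.0" }
    }
  }
}
```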
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden. It discloses the economic behavior (earning revenue in DAC) and marketplace visibility, but omits mutation safety (idempotency, reversibility), required permissions beyond auth_token, or post-registration state (immediate availability vs pending review).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. First sentence establishes core action and scope; second explains value proposition (revenue model). Information is front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers the 'why' (revenue generation) and 'what' (marketplace publishing) for a 7-parameter registration tool. Missing output schema is acceptable for a creation operation. Could improve by mentioning DAC currency context or registration prerequisites, but sufficient for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description reinforces 'price_dac' semantics by mentioning 'Set a price in DAC', but adds no further detail on 'layer' (standalone vs component distinction), 'metadata' structure, or validation constraints beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Publish' with clear resource 'tool' and destination 'marketplace'. Effectively distinguishes from sibling 'purchase_tool' (buying vs selling) and 'register_agent' (tool vs agent registration) through the context of monetization and marketplace publishing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'tool you built' (suggesting prerequisite of having developed the tool) and monetization focus. However, lacks explicit guidance on when to use vs 'purchase_tool' or whether tools can be unpublished/updated later. No mention of prerequisites like requiring a deployed endpoint first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.