
Ambr

Server Details

Ricardian contracts for AI agents — dual-format, SHA-256 bound, legible by construction.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 4.4/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose with no overlap: handshake initiates a contract action, create_contract generates a new contract, get_contract retrieves full contract data, get_contract_status checks status and amendments, list_templates shows available templates, and verify_hash confirms document integrity. The descriptions clearly differentiate their functions, preventing agent misselection.

Naming Consistency: 5/5

All tools follow a consistent 'ambr_verb_noun' naming pattern (e.g., ambr_create_contract, ambr_get_contract_status). This uniformity makes the tool set predictable and easy to navigate, with no deviations in style or structure across the six tools.

Tool Count: 5/5

Six tools are well-scoped for a contract management server, covering core operations like creation, retrieval, status checking, template listing, handshake initiation, and integrity verification. Each tool serves a clear purpose without redundancy, making the count appropriate for the domain.

Completeness: 4/5

The tool set provides strong coverage for contract lifecycle management, including create, get, status, and verification, with templates and handshake support. A minor gap exists in update or delete operations for contracts, but agents can work around this through amendments or status changes, and the core workflows are well-supported.

Available Tools

6 tools
ambr_agent_handshake: A
Idempotent

Initiate a handshake on a contract on behalf of your delegating principal.

Requires an API key with an active delegation (principal wallet registered via /api/v1/delegations). Records the agent's intent to accept, reject, or request changes on the contract. The principal must separately approve via wallet signature on the Reader Portal.

Args:

  • contract_id (string, required): Contract ID (amb-YYYY-NNNN), SHA-256 hash, or UUID

  • intent (string, required): "accept" | "reject" | "request_changes"

  • message (string, optional): Note for the counterparty

  • visibility_preference (string, optional): "private" | "metadata_only" | "public" | "encrypted"

Returns: Handshake status and next steps for principal approval.

Legibility: the handshake itself is auditable — delegation scope, agent identity, and principal approval are recorded alongside the contract hash.
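
The call shape above can be sketched as follows. The argument names and enum values come from the Args list; the helper function and validation logic are illustrative, not part of the Ambr API.

```python
# Illustrative sketch: assembling arguments for an ambr_agent_handshake call.
VALID_INTENTS = {"accept", "reject", "request_changes"}
VALID_VISIBILITY = {"private", "metadata_only", "public", "encrypted"}

def build_handshake_args(contract_id, intent, message=None, visibility_preference=None):
    """Build the JSON arguments dict, rejecting invalid enum values early."""
    if intent not in VALID_INTENTS:
        raise ValueError(f"intent must be one of {sorted(VALID_INTENTS)}")
    if visibility_preference is not None and visibility_preference not in VALID_VISIBILITY:
        raise ValueError(f"visibility_preference must be one of {sorted(VALID_VISIBILITY)}")
    args = {"contract_id": contract_id, "intent": intent}
    if message is not None:
        args["message"] = message
    if visibility_preference is not None:
        args["visibility_preference"] = visibility_preference
    return args

print(build_handshake_args("amb-2026-0042", "accept", message="Terms look good"))
```

Optional fields are omitted rather than sent as null, since the schema marks them optional.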

Parameters (JSON Schema):

  • intent (required): Handshake intent
  • message (optional): Note for counterparty
  • contract_id (required): Contract ID, SHA-256 hash, or UUID
  • visibility_preference (optional): Visibility preference for negotiation
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate non-readOnly, non-destructive, and idempotent, which the description aligns with by describing a handshake initiation (non-read) that is auditable and requires approval. The description adds valuable context beyond annotations: it clarifies that the action records intent, requires principal approval, and is auditable with delegation scope and identity. However, it doesn't detail rate limits or error behaviors, keeping it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, requirements, args, returns, legibility) and front-loaded key information. It avoids redundancy, but the Args section slightly repeats schema details, and the legibility note could be more integrated. Overall, it's efficient with most sentences earning their place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involves delegation, approval, and auditing) and lack of output schema, the description is mostly complete. It covers purpose, prerequisites, parameters, returns, and legibility. However, it doesn't fully explain the return values ('Handshake status and next steps') or potential errors, leaving minor gaps for an agent to infer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal semantic value beyond the schema—it briefly mentions parameters in the Args section but doesn't provide additional context like examples or edge cases. This meets the baseline of 3 since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Initiate a handshake on a contract') and distinguishes it from siblings like ambr_create_contract (creates contracts) or ambr_get_contract (reads contracts). It specifies acting 'on behalf of your delegating principal' and recording 'intent to accept, reject, or request changes,' making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Requires an API key with an active delegation' and 'The principal must separately approve via wallet signature.' It also implies when not to use it by distinguishing from siblings—e.g., use ambr_get_contract for reading, not this for handshake initiation. This provides clear context and prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ambr_create_contract: A

Generate a Ricardian Contract from a template.

Creates a dual-format contract (human-readable legal text + machine-parsable JSON) using AI, linked by SHA-256 hash. The contract is stored on Ambr and accessible via the Reader Portal.

Requires a valid API key (X-API-Key header on the HTTP request) with available credits. Use ambr_list_templates first to discover templates and their required parameters.

Args:

  • template (string, required): Template slug (e.g. "c1-agent-delegation")

  • parameters (object, required): Template-specific parameters matching the schema

  • principal_declaration (object, required): { agent_id, principal_name, principal_type }

  • parent_contract_hash (string, optional): SHA-256 hash of parent contract for amendments

  • amendment_type (string, optional): "original" | "amendment" | "extension"

Returns:

  • contract_id: Unique ID (e.g. "amb-2026-0042")

  • sha256_hash: SHA-256 hash for verification

  • status: Contract status

  • reader_url: URL to view in Reader Portal

  • credits_remaining: Remaining API credits

Legibility: Output is dual-format by construction and replayable to the original SHA-256 hash — the basis of Ambr's legibility guarantee.
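
A request body for this tool might look like the sketch below. The top-level field names follow the Args list; the values inside "parameters" and "principal_declaration" are hypothetical and would need to match the template's parameter_schema from ambr_list_templates.

```python
# Illustrative ambr_create_contract request body (values are made up).
import json

payload = {
    "template": "c1-agent-delegation",
    "parameters": {                       # hypothetical template parameters
        "scope": "calendar-scheduling",
        "duration_days": 30,
    },
    "principal_declaration": {            # hypothetical identifiers
        "agent_id": "agent-001",
        "principal_name": "Example Corp",
        "principal_type": "organization",
    },
    "amendment_type": "original",
}

print(json.dumps(payload, indent=2))
```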

Parameters (JSON Schema):

  • template (required): Template slug from ambr_list_templates
  • parameters (required): Template-specific parameters
  • amendment_type (optional)
  • parent_contract_hash (optional): SHA-256 hash of parent contract (for amendments)
  • principal_declaration (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the dual-format nature (human-readable + JSON), storage location (Ambr/Reader Portal), authentication requirements (API key with credits), and the legibility guarantee. While annotations cover basic safety (non-destructive, non-idempotent), the description provides richer operational context without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear sections: purpose statement, behavioral context, prerequisites, usage guidance, parameter explanations, return values, and legibility guarantee. Every sentence adds value with zero wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex contract-creation tool with 5 parameters, nested objects, and no output schema, the description provides comprehensive context: it explains the tool's purpose, behavioral characteristics, authentication requirements, parameter semantics, return values, and sibling tool relationships. The legibility guarantee section adds important architectural context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 60% schema description coverage, the description compensates well by explaining the purpose of key parameters: template slug comes from ambr_list_templates, parameters are template-specific, and principal_declaration structure is clarified. It also explains optional parameters' purposes (parent_contract_hash for amendments, amendment_type options), adding meaningful context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Generate a Ricardian Contract from a template') and distinguishes it from siblings by mentioning dual-format output and storage on Ambr. It explicitly names a sibling tool (ambr_list_templates) for discovering templates, showing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use ambr_list_templates first to discover templates and their required parameters.' It also mentions prerequisites (API key with credits) and distinguishes from sibling tools by specifying its unique contract-creation function.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ambr_get_contract: A
Read-only, Idempotent

Retrieve a contract by ID, SHA-256 hash, or UUID.

With a valid API key (contract creator): returns the full contract including human-readable text, machine-readable JSON, status, and principal declaration. Without authentication: returns metadata only (contract_id, status, hash, dates).

Supports three lookup formats:

  • Contract ID: "amb-2026-0042"

  • SHA-256 hash: 64-character hex string

  • UUID: Standard UUID format

Args:

  • id (string, required): Contract ID, SHA-256 hash, or UUID

Returns: Full contract (if authorized) or metadata-only response.

Legibility: retrieval preserves the dual-format pairing — prose and JSON always replay to the same SHA-256.
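
The three lookup formats can be told apart before calling the tool. The patterns below are inferred from the examples in the description (they are not published by Ambr, so treat them as a sketch).

```python
# Sketch: classify which accepted lookup format a contract reference uses.
import re

def classify_contract_ref(ref: str) -> str:
    """Return "contract_id", "sha256", "uuid", or "unknown"."""
    if re.fullmatch(r"amb-\d{4}-\d{4}", ref):       # e.g. "amb-2026-0042"
        return "contract_id"
    if re.fullmatch(r"[0-9a-fA-F]{64}", ref):        # 64-char hex digest
        return "sha256"
    if re.fullmatch(
        r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
        r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}", ref):    # standard UUID
        return "uuid"
    return "unknown"

print(classify_contract_ref("amb-2026-0042"))
```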

Parameters (JSON Schema):

  • id (required): Contract ID (amb-YYYY-NNNN), SHA-256 hash, or UUID
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, but the description adds valuable context beyond that: it explains the authentication-dependent response (full contract vs. metadata-only), mentions the dual-format pairing (prose and JSON), and notes that retrieval preserves hash consistency. This enhances understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. Each sentence adds value: authentication behavior, lookup formats, parameter explanation, return details, and legibility note. There's no wasted text, and it efficiently conveys necessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (authentication-dependent responses, multiple lookup formats) and lack of output schema, the description does a good job explaining what to expect. It covers return types (full contract vs. metadata) and behavioral notes like hash preservation. A slight gap is the absence of explicit error handling or rate limit mentions, but it's largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already fully documents the 'id' parameter. The description repeats the same information about ID formats without adding new syntax or format details beyond what's in the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve a contract') and distinguishes it from siblings like ambr_get_contract_status (which only gets status) and ambr_create_contract (which creates rather than retrieves). It explicitly lists the three lookup formats, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use it (to retrieve contracts by ID, hash, or UUID) and implies alternatives by mentioning authentication-dependent behavior. However, it doesn't explicitly state when to use ambr_get_contract_status instead, which would be helpful for sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ambr_get_contract_status: A
Read-only, Idempotent

Check the status of a contract and its amendment chain.

Returns the current status and any linked amendments (parent or child contracts). Useful for verifying if a contract is active, amended, or terminated.

Args:

  • id (string, required): Contract ID, SHA-256 hash, or UUID

Returns:

  • contract_id, status, created_at

  • amendment_type, parent_contract_hash

  • amendments: Array of child contracts (if any)

Legibility: amendments are bilateral and themselves dual-format — the chain stays legible from original through every revision.
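
A response shaped like the Returns list above can be summarized into a readable chain. The sample response and the fields on each child contract are hypothetical; the description only says amendments is an array of child contracts.

```python
# Sketch: summarize an ambr_get_contract_status response as a status chain.
def amendment_chain(status_response: dict) -> list:
    """List (contract_id, status) for the contract and its child amendments."""
    chain = [(status_response["contract_id"], status_response["status"])]
    for child in status_response.get("amendments", []):
        chain.append((child["contract_id"], child["status"]))
    return chain

resp = {                                  # hypothetical response
    "contract_id": "amb-2026-0042",
    "status": "amended",
    "created_at": "2026-01-15T00:00:00Z",
    "amendment_type": "original",
    "parent_contract_hash": None,
    "amendments": [{"contract_id": "amb-2026-0057", "status": "active"}],
}
print(amendment_chain(resp))
```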

Parameters (JSON Schema):

  • id (required): Contract ID (amb-YYYY-NNNN), SHA-256 hash, or UUID
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), but the description adds valuable context beyond this: it explains that amendments are 'bilateral and themselves dual-format' and that 'the chain stays legible from original through every revision,' which clarifies how amendment data is structured and maintained. This enhances understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized, with a clear purpose statement upfront, followed by usage context, parameter details, return values, and additional legibility notes. Most sentences earn their place by adding useful information, though the 'Legibility' sentence could be slightly more concise. Overall, it is efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema), the description is fairly complete: it covers purpose, usage, parameters, return values, and amendment behavior. However, it lacks details on error handling or specific status values (e.g., what 'active' means), which could be helpful. With annotations providing safety info, it is mostly adequate but has minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents the single parameter 'id' with its description. The description repeats the parameter info in the 'Args' section but does not add significant meaning beyond what the schema provides, such as examples or edge cases. With high schema coverage, the baseline score of 3 is appropriate as the description adds minimal extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('check the status', 'returns') and resources ('contract and its amendment chain'), distinguishing it from siblings like ambr_get_contract (which likely retrieves full details) and ambr_verify_hash (which focuses on hash verification). It explicitly mentions what it returns, making the purpose distinct and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('useful for verifying if a contract is active, amended, or terminated'), but it does not explicitly state when not to use it or name alternatives among siblings. While it implies differentiation from other tools by focusing on status and amendments, it lacks explicit exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ambr_list_templates: A
Read-only, Idempotent

List available contract templates on Ambr.

Returns all active Ricardian Contract templates with their slugs, names, descriptions, categories, parameter schemas, and pricing. Use this to discover which templates are available before creating a contract with ambr_create_contract.

No authentication required.

Returns: Array of template objects with slug, name, description, category, parameter_schema, price_cents, and version fields.

Legibility: templates are the parameter schema for the dual-format contracts you create — starting here keeps your request conformant and your output defensible.
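
Starting from a list_templates response, an agent can check which required parameters it still needs before calling ambr_create_contract. The sample template entry below uses the field names from the Returns line, but its slug, price, and parameter_schema contents are hypothetical.

```python
# Sketch: find a template by slug and report missing required parameters.
templates = [
    {
        "slug": "c1-agent-delegation",
        "name": "Agent Delegation",
        "category": "delegation",
        "price_cents": 500,                            # hypothetical value
        "parameter_schema": {
            "required": ["scope", "duration_days"],    # hypothetical schema
        },
    },
]

def missing_params(templates, slug, supplied):
    """Return required parameter names not yet present in `supplied`."""
    tpl = next(t for t in templates if t["slug"] == slug)
    required = set(tpl["parameter_schema"].get("required", []))
    return sorted(required - set(supplied))

print(missing_params(templates, "c1-agent-delegation", {"scope": "calendar"}))
```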

Parameters (JSON Schema): none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key traits (read-only, non-destructive, idempotent, closed-world), but the description adds valuable context beyond that: it states 'No authentication required' (auth needs) and explains the return format in detail. It also provides conceptual context about templates being 'parameter schema for dual-format contracts' and benefits like 'keeps your request conformant'. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage guidance, authentication info, return details, and conceptual notes. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema), the description is complete: it covers purpose, usage, behavioral traits (including auth), return format, and conceptual rationale. With annotations providing structured safety info, the description fills all necessary gaps for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description compensates by explaining the absence of parameters implicitly (it's a simple list operation) and adds semantic context about what the tool does without inputs, which is appropriate for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('available contract templates on Ambr'), specifying it returns 'all active Ricardian Contract templates' with detailed fields. It explicitly distinguishes from sibling tools by mentioning 'before creating a contract with ambr_create_contract', making the purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use this to discover which templates are available before creating a contract with ambr_create_contract.' This directly names the alternative sibling tool and gives a clear context for usage, with no misleading or missing exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ambr_verify_hash: A
Read-only, Idempotent

Verify a contract's SHA-256 hash to confirm document integrity.

Checks whether the provided hash matches a contract stored on Ambr. Returns verification status, contract metadata, and Reader Portal URL if found.

Args:

  • hash (string, required): SHA-256 hash (64-character hex string)

Returns:

  • verified: boolean

  • contract_id: string (if found)

  • status: string (if found)

  • reader_url: string (if found)

Legibility: verification is the point at which legibility becomes provable — matching hash means the prose a human reads and the JSON a machine parses are the same document that was originally signed.
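
To produce the 64-character hex string this tool expects, a caller can hash the document bytes locally with standard SHA-256 before verifying against Ambr. The helper name below is illustrative.

```python
# Sketch: compute the hex digest to pass as the `hash` argument.
import hashlib

def sha256_hex(document_bytes: bytes) -> str:
    """Return the 64-character lowercase hex SHA-256 digest."""
    return hashlib.sha256(document_bytes).hexdigest()

digest = sha256_hex(b"example contract text")
print(digest)
```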

Parameters (JSON Schema):

  • hash (required): SHA-256 hash to verify (64-character hex)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context about what verification means ('matching hash means the prose a human reads and the JSON a machine parses are the same document that was originally signed') and explains the significance of the operation beyond the basic safety profile indicated by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, args, returns, conceptual note) and appropriately sized. The 'Legibility' paragraph adds conceptual value but could be considered slightly extraneous for pure tool selection. Most sentences earn their place in explaining the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter verification tool with comprehensive annotations and no output schema, the description provides good completeness. It explains what verification accomplishes, documents the return structure, and adds conceptual context. The main gap is the lack of an output schema, but the description compensates by documenting return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the schema already documenting the single required hash parameter as 'SHA-256 hash to verify (64-character hex)'. The description repeats this information in the Args section but doesn't add meaningful semantic context beyond what the schema provides. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('verify a contract's SHA-256 hash') and resource ('contract stored on Ambr'), distinguishing it from siblings like create_contract or get_contract. It explicitly mentions document integrity verification, which is a distinct purpose from other contract operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to confirm document integrity'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools. The context is sufficient for understanding the primary use case without explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
