Ambr
Server Details
Ricardian contracts for AI agents — dual-format, SHA-256 bound, legible by construction.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 6 of 6 tools scored.
Each tool has a distinct purpose with no overlap: handshake initiates a contract action, create_contract generates a new contract, get_contract retrieves full contract data, get_contract_status checks status and amendments, list_templates shows available templates, and verify_hash confirms document integrity. The descriptions clearly differentiate their functions, preventing agent misselection.
All tools follow a consistent 'ambr_verb_noun' naming pattern (e.g., ambr_create_contract, ambr_get_contract_status). This uniformity makes the tool set predictable and easy to navigate, with no deviations in style or structure across the six tools.
Six tools are well-scoped for a contract management server, covering core operations like creation, retrieval, status checking, template listing, handshake initiation, and integrity verification. Each tool serves a clear purpose without redundancy, making the count appropriate for the domain.
The tool set provides strong coverage for contract lifecycle management, including create, get, status, and verification, with templates and handshake support. A minor gap exists in update or delete operations for contracts, but agents can work around this through amendments or status changes, and the core workflows are well-supported.
Available Tools
6 tools

ambr_agent_handshake (Idempotent)
Initiate a handshake on a contract on behalf of your delegating principal.
Requires an API key with an active delegation (principal wallet registered via /api/v1/delegations). Records the agent's intent to accept, reject, or request changes on the contract. The principal must separately approve via wallet signature on the Reader Portal.
Args:
contract_id (string, required): Contract ID (amb-YYYY-NNNN), SHA-256 hash, or UUID
intent (string, required): "accept" | "reject" | "request_changes"
message (string, optional): Note for the counterparty
visibility_preference (string, optional): "private" | "metadata_only" | "public" | "encrypted"
Returns: Handshake status and next steps for principal approval.
Legibility: the handshake itself is auditable — delegation scope, agent identity, and principal approval are recorded alongside the contract hash.
| Name | Required | Description | Default |
|---|---|---|---|
| intent | Yes | Handshake intent | |
| message | No | Optional note for counterparty | |
| contract_id | Yes | Contract ID, SHA-256 hash, or UUID | |
| visibility_preference | No | Optional visibility preference for negotiation | |
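As a sketch, the arguments for a handshake call might be assembled like this. All values are illustrative; only the field names come from the parameter table above.

```python
# Illustrative ambr_agent_handshake arguments; every value here is hypothetical.
handshake_args = {
    "contract_id": "amb-2026-0042",              # contract ID, SHA-256 hash, or UUID
    "intent": "accept",                          # "accept" | "reject" | "request_changes"
    "message": "Terms match the agreed scope.",  # optional note for the counterparty
    "visibility_preference": "private",          # optional visibility setting
}

# Client-side sanity check before sending the call.
assert handshake_args["intent"] in {"accept", "reject", "request_changes"}
```

Remember that the call only records the agent's intent; per the description, the principal still approves separately via wallet signature on the Reader Portal.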
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations mark the tool as non-read-only, non-destructive, and idempotent, and the description aligns with them by presenting the handshake as a state-changing initiation that is auditable and requires approval. The description adds valuable context beyond the annotations: it clarifies that the action records intent, requires principal approval, and is auditable with delegation scope and identity. However, it doesn't detail rate limits or error behaviors, keeping it from a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, requirements, args, returns, legibility) and front-loaded key information. It avoids redundancy, but the Args section slightly repeats schema details, and the legibility note could be more integrated. Overall, it's efficient with most sentences earning their place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (involves delegation, approval, and auditing) and lack of output schema, the description is mostly complete. It covers purpose, prerequisites, parameters, returns, and legibility. However, it doesn't fully explain the return values ('Handshake status and next steps') or potential errors, leaving minor gaps for an agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal semantic value beyond the schema—it briefly mentions parameters in the Args section but doesn't provide additional context like examples or edge cases. This meets the baseline of 3 since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Initiate a handshake on a contract') and distinguishes it from siblings like ambr_create_contract (creates contracts) or ambr_get_contract (reads contracts). It specifies acting 'on behalf of your delegating principal' and recording 'intent to accept, reject, or request changes,' making the purpose unambiguous and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Requires an API key with an active delegation' and 'The principal must separately approve via wallet signature.' It also implies when not to use it by distinguishing from siblings—e.g., use ambr_get_contract for reading, not this for handshake initiation. This provides clear context and prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ambr_create_contract
Generate a Ricardian Contract from a template.
Creates a dual-format contract (human-readable legal text + machine-parsable JSON) using AI, linked by SHA-256 hash. The contract is stored on Ambr and accessible via the Reader Portal.
Requires a valid API key (X-API-Key header on the HTTP request) with available credits. Use ambr_list_templates first to discover templates and their required parameters.
Args:
template (string, required): Template slug (e.g. "c1-agent-delegation")
parameters (object, required): Template-specific parameters matching the schema
principal_declaration (object, required): { agent_id, principal_name, principal_type }
parent_contract_hash (string, optional): SHA-256 hash of parent contract for amendments
amendment_type (string, optional): "original" | "amendment" | "extension"
Returns:
contract_id: Unique ID (e.g. "amb-2026-0042")
sha256_hash: SHA-256 hash for verification
status: Contract status
reader_url: URL to view in Reader Portal
credits_remaining: Remaining API credits
Legibility: Output is dual-format by construction and replayable to the original SHA-256 hash — the basis of Ambr's legibility guarantee.
| Name | Required | Description | Default |
|---|---|---|---|
| template | Yes | Template slug from ambr_list_templates | |
| parameters | Yes | Template-specific parameters | |
| amendment_type | No | "original", "amendment", or "extension" | |
| parent_contract_hash | No | SHA-256 hash of parent contract (for amendments) | |
| principal_declaration | Yes | { agent_id, principal_name, principal_type } | |
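A minimal sketch of a create-contract payload, assuming the template slug and principal values shown here are placeholders; real parameter values depend on the schema returned by ambr_list_templates.

```python
# Hypothetical ambr_create_contract arguments; field names follow the Args
# list above, but the identifiers and values are invented for illustration.
create_args = {
    "template": "c1-agent-delegation",
    "parameters": {},  # template-specific; discover the schema via ambr_list_templates
    "principal_declaration": {
        "agent_id": "agent-123",           # hypothetical agent identifier
        "principal_name": "Example Corp",  # hypothetical principal
        "principal_type": "organization",
    },
    "amendment_type": "original",  # "original" | "amendment" | "extension"
}

# The three required fields from the parameter table must all be present.
required = {"template", "parameters", "principal_declaration"}
assert required <= set(create_args)
```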
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains the dual-format nature (human-readable + JSON), storage location (Ambr/Reader Portal), authentication requirements (API key with credits), and the legibility guarantee. While annotations cover basic safety (non-destructive, non-idempotent), the description provides richer operational context without contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections: purpose statement, behavioral context, prerequisites, usage guidance, parameter explanations, return values, and legibility guarantee. Every sentence adds value with zero wasted words, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex contract-creation tool with 5 parameters, nested objects, and no output schema, the description provides comprehensive context: it explains the tool's purpose, behavioral characteristics, authentication requirements, parameter semantics, return values, and sibling tool relationships. The legibility guarantee section adds important architectural context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 60% schema description coverage, the description compensates well by explaining the purpose of key parameters: template slug comes from ambr_list_templates, parameters are template-specific, and principal_declaration structure is clarified. It also explains optional parameters' purposes (parent_contract_hash for amendments, amendment_type options), adding meaningful context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate a Ricardian Contract from a template') and distinguishes it from siblings by mentioning dual-format output and storage on Ambr. It explicitly names a sibling tool (ambr_list_templates) for discovering templates, showing clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use ambr_list_templates first to discover templates and their required parameters.' It also mentions prerequisites (API key with credits) and distinguishes from sibling tools by specifying its unique contract-creation function.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ambr_get_contract (Read-only, Idempotent)
Retrieve a contract by ID, SHA-256 hash, or UUID.
With a valid API key (contract creator): returns the full contract including human-readable text, machine-readable JSON, status, and principal declaration. Without authentication: returns metadata only (contract_id, status, hash, dates).
Supports three lookup formats:
Contract ID: "amb-2026-0042"
SHA-256 hash: 64-character hex string
UUID: Standard UUID format
Args:
id (string, required): Contract ID, SHA-256 hash, or UUID
Returns: Full contract (if authorized) or metadata-only response.
Legibility: retrieval preserves the dual-format pairing — prose and JSON always replay to the same SHA-256.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Contract ID (amb-YYYY-NNNN), SHA-256 hash, or UUID | |
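The three lookup formats can be distinguished client-side before a call. A small sketch, where the patterns mirror the formats listed above and are a convenience, not server-side validation:

```python
import re

def lookup_format(identifier: str) -> str:
    """Classify which of the three accepted lookup formats an identifier uses."""
    if re.fullmatch(r"amb-\d{4}-\d{4}", identifier):
        return "contract_id"   # e.g. "amb-2026-0042"
    if re.fullmatch(r"[0-9a-fA-F]{64}", identifier):
        return "sha256"        # 64-character hex string
    if re.fullmatch(
        r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
        r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}",
        identifier,
    ):
        return "uuid"          # standard UUID format
    raise ValueError(f"unrecognized identifier format: {identifier!r}")
```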
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior, but the description adds valuable context beyond that: it explains the authentication-dependent response (full contract vs. metadata-only), mentions the dual-format pairing (prose and JSON), and notes that retrieval preserves hash consistency. This enhances understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. Each sentence adds value: authentication behavior, lookup formats, parameter explanation, return details, and legibility note. There's no wasted text, and it efficiently conveys necessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (authentication-dependent responses, multiple lookup formats) and lack of output schema, the description does a good job explaining what to expect. It covers return types (full contract vs. metadata) and behavioral notes like hash preservation. A slight gap is the absence of explicit error handling or rate limit mentions, but it's largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already fully documents the 'id' parameter. The description repeats the same information about ID formats without adding new syntax or format details beyond what's in the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve a contract') and distinguishes it from siblings like ambr_get_contract_status (which only gets status) and ambr_create_contract (which creates rather than retrieves). It explicitly lists the three lookup formats, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use it (to retrieve contracts by ID, hash, or UUID) and implies alternatives by mentioning authentication-dependent behavior. However, it doesn't explicitly state when to use ambr_get_contract_status instead, which would be helpful for sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ambr_get_contract_status (Read-only, Idempotent)
Check the status of a contract and its amendment chain.
Returns the current status and any linked amendments (parent or child contracts). Useful for verifying if a contract is active, amended, or terminated.
Args:
id (string, required): Contract ID, SHA-256 hash, or UUID
Returns:
contract_id, status, created_at
amendment_type, parent_contract_hash
amendments: Array of child contracts (if any)
Legibility: amendments are bilateral and themselves dual-format — the chain stays legible from original through every revision.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Contract ID (amb-YYYY-NNNN), SHA-256 hash, or UUID | |
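A sketch of walking the amendment chain from a status response. The response shape follows the Returns list above, but nesting child amendments inside each amendment object is an assumption for illustration.

```python
# Collect child contract IDs from a hypothetical ambr_get_contract_status
# response, depth-first, since an amendment may itself be amended.
def amendment_ids(status: dict) -> list:
    ids = []
    for child in status.get("amendments", []):
        ids.append(child["contract_id"])
        ids.extend(amendment_ids(child))
    return ids

# Invented example response for illustration only.
example = {
    "contract_id": "amb-2026-0042",
    "status": "amended",
    "amendments": [
        {"contract_id": "amb-2026-0057", "amendments": []},
    ],
}
```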
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), but the description adds valuable context beyond this: it explains that amendments are 'bilateral and themselves dual-format' and that 'the chain stays legible from original through every revision,' which clarifies how amendment data is structured and maintained. This enhances understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized, with a clear purpose statement upfront, followed by usage context, parameter details, return values, and additional legibility notes. Most sentences earn their place by adding useful information, though the 'Legibility' sentence could be slightly more concise. Overall, it is efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema), the description is fairly complete: it covers purpose, usage, parameters, return values, and amendment behavior. However, it lacks details on error handling or specific status values (e.g., what 'active' means), which could be helpful. With annotations providing safety info, it is mostly adequate but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema fully documents the single parameter 'id' with its description. The description repeats the parameter info in the 'Args' section but does not add significant meaning beyond what the schema provides, such as examples or edge cases. With high schema coverage, the baseline score of 3 is appropriate as the description adds minimal extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('check the status', 'returns') and resources ('contract and its amendment chain'), distinguishing it from siblings like ambr_get_contract (which likely retrieves full details) and ambr_verify_hash (which focuses on hash verification). It explicitly mentions what it returns, making the purpose distinct and well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('useful for verifying if a contract is active, amended, or terminated'), but it does not explicitly state when not to use it or name alternatives among siblings. While it implies differentiation from other tools by focusing on status and amendments, it lacks explicit exclusions or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ambr_list_templates (Read-only, Idempotent)
List available contract templates on Ambr.
Returns all active Ricardian Contract templates with their slugs, names, descriptions, categories, parameter schemas, and pricing. Use this to discover which templates are available before creating a contract with ambr_create_contract.
No authentication required.
Returns: Array of template objects with slug, name, description, category, parameter_schema, price_cents, and version fields.
Legibility: templates are the parameter schema for the dual-format contracts you create — starting here keeps your request conformant and your output defensible.
This tool takes no parameters.
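A sketch of selecting a template from the response, assuming the field names in the Returns list above; the example template data is invented.

```python
# Hypothetical ambr_list_templates response entries (fields per the docs above).
templates = [
    {"slug": "c1-agent-delegation", "name": "Agent Delegation", "price_cents": 500},
    {"slug": "c2-data-processing", "name": "Data Processing", "price_cents": 750},
]

def find_template(templates: list, slug: str) -> dict:
    """Pick the template whose slug will be passed to ambr_create_contract."""
    matches = [t for t in templates if t["slug"] == slug]
    if not matches:
        raise KeyError(f"no template with slug {slug!r}")
    return matches[0]
```

Calling ambr_list_templates first, then resolving the slug this way, keeps the later create call conformant with the template's parameter schema.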
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key traits (read-only, non-destructive, idempotent, closed-world), but the description adds valuable context beyond that: it states 'No authentication required' (auth needs) and explains the return format in detail. It also provides conceptual context about templates being 'parameter schema for dual-format contracts' and benefits like 'keeps your request conformant'. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage guidance, authentication info, return details, and conceptual notes. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is complete: it covers purpose, usage, behavioral traits (including auth), return format, and conceptual rationale. With annotations providing structured safety info, the description fills all necessary gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is high. The description compensates by explaining the absence of parameters implicitly (it's a simple list operation) and adds semantic context about what the tool does without inputs, which is appropriate for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('available contract templates on Ambr'), specifying it returns 'all active Ricardian Contract templates' with detailed fields. It explicitly distinguishes from sibling tools by mentioning 'before creating a contract with ambr_create_contract', making the purpose specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use this to discover which templates are available before creating a contract with ambr_create_contract.' This directly names the alternative sibling tool and gives a clear context for usage, with no misleading or missing exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ambr_verify_hash (Read-only, Idempotent)
Verify a contract's SHA-256 hash to confirm document integrity.
Checks whether the provided hash matches a contract stored on Ambr. Returns verification status, contract metadata, and Reader Portal URL if found.
Args:
hash (string, required): SHA-256 hash (64-character hex string)
Returns:
verified: boolean
contract_id: string (if found)
status: string (if found)
reader_url: string (if found)
Legibility: verification is the point at which legibility becomes provable — matching hash means the prose a human reads and the JSON a machine parses are the same document that was originally signed.
| Name | Required | Description | Default |
|---|---|---|---|
| hash | Yes | SHA-256 hash to verify (64-character hex) | |
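Producing the 64-character hex digest the tool expects is straightforward locally. Note that the exact canonical byte encoding Ambr hashes is an assumption here; this only shows the digest shape.

```python
import hashlib

def sha256_hex(document_bytes: bytes) -> str:
    """Hex SHA-256 digest, the format ambr_verify_hash accepts."""
    return hashlib.sha256(document_bytes).hexdigest()

digest = sha256_hex(b"example contract body")
assert len(digest) == 64                               # 64-character hex string
assert all(c in "0123456789abcdef" for c in digest)    # lowercase hex
```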
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context about what verification means ('matching hash means the prose a human reads and the JSON a machine parses are the same document that was originally signed') and explains the significance of the operation beyond the basic safety profile indicated by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, args, returns, conceptual note) and appropriately sized. The 'Legibility' paragraph adds conceptual value but could be considered slightly extraneous for pure tool selection. Most sentences earn their place in explaining the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter verification tool with comprehensive annotations and no output schema, the description provides good completeness. It explains what verification accomplishes, documents the return structure, and adds conceptual context. The main gap is the lack of an output schema, but the description compensates by documenting return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the schema already documenting the single required hash parameter as 'SHA-256 hash to verify (64-character hex)'. The description repeats this information in the Args section but doesn't add meaningful semantic context beyond what the schema provides. Baseline 3 is appropriate when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('verify a contract's SHA-256 hash') and resource ('contract stored on Ambr'), distinguishing it from siblings like create_contract or get_contract. It explicitly mentions document integrity verification, which is a distinct purpose from other contract operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to confirm document integrity'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools. The context is sufficient for understanding the primary use case without explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a `/.well-known/glama.json` file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
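A minimal sketch of generating that claim file, assuming the email is a placeholder and the file ends up served at `/.well-known/glama.json` from your web root:

```python
import json
from pathlib import Path

# Claim file structure as shown above; replace the email placeholder
# with the address tied to your Glama account.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

well_known = Path(".well-known")
well_known.mkdir(exist_ok=True)
(well_known / "glama.json").write_text(json.dumps(claim, indent=2))
```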
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!