Tenzro Canton MCP
Server Details
Canton MCP: DAML submit, contracts, events, parties, domains, CIP-56 tokens, DvP, DAR upload.
- Status: Unhealthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: tenzro/tenzro-network
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 14 of 14 tools scored. Lowest: 3.1/5.
Most tools have distinct purposes, but there is overlap between the general submit_command and specialized convenience tools like create_asset and transfer, which could cause confusion for an agent deciding which to use.
All tools start with 'canton_' and mostly follow a verb_noun pattern. However, 'canton_transfer' is verb-only and 'canton_dvp_settle' has a noun_verb structure, introducing minor inconsistency.
14 tools is well within the ideal 3-15 range. Each tool covers a specific aspect of Canton operations without being overwhelming or too sparse.
The tool set covers core operations like party management, asset creation, settlement, and queries. A minor gap is the lack of an explicit archive/delete tool, though it can be done via submit_command.
Available Tools
14 tools
canton_allocate_party (Grade A)
Allocate a new party on the Canton participant node. Returns the fully-qualified party identifier (name::fingerprint) for use in DAML commands and queries.
| Name | Required | Description | Default |
|---|---|---|---|
| party_hint | Yes | Hint for the party name (e.g. 'Alice') | |
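The 'name::fingerprint' identifier format the description promises can be sketched as a small parser. This is an illustrative sketch only: the request payload shape and the fingerprint value are assumptions, not actual server output.

```python
def parse_party_id(party_id: str) -> tuple[str, str]:
    """Split a fully-qualified Canton party id into (name, fingerprint)."""
    name, sep, fingerprint = party_id.partition("::")
    if not (sep and name and fingerprint):
        raise ValueError(f"not a fully-qualified party id: {party_id!r}")
    return name, fingerprint

# A call takes only the party_hint argument from the table above:
request = {"tool": "canton_allocate_party", "arguments": {"party_hint": "Alice"}}

# The returned identifier can then be split for logging (fingerprint made up):
name, fingerprint = parse_party_id("Alice::1220deadbeef")
assert name == "Alice" and fingerprint == "1220deadbeef"
```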
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure. It indicates that the tool creates a new party (a write operation) and returns an identifier, but does not disclose potential side effects, error conditions (e.g., duplicate party hint), or permission requirements beyond the implicit creation action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of two short sentences that front-load the action and outcome without any redundant phrasing. Every word earns its place, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter, no output schema, and low complexity, the description fully explains what the tool does and what it returns. It provides enough context for an agent to use the tool correctly, including the purpose of the return value for DAML commands.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the baseline is 3. The description adds no additional semantic meaning for the 'party_hint' parameter beyond what the schema already provides (a hint for the party name). The return value explanation is helpful but does not enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Allocate' and the resource 'a new party on the Canton participant node'. It also specifies the return value as a fully-qualified party identifier, distinguishing it clearly from sibling tools like 'canton_list_parties' which only list existing parties.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or conditions for use. There is no explicit 'when not to use' or mention of related tools, leaving the agent to infer usage context from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_create_asset (Grade A)
Create a tokenized asset (bond, equity, repo, or custom) as a DAML contract on Canton. Submits a Create command with the asset parameters. For bonds, maturity_date is required.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Nominal amount or quantity of the asset | |
| issuer | Yes | Party issuing the asset | |
| user_id | Yes | User ID for command submission | |
| asset_type | Yes | Asset type: 'bond', 'equity', 'repo', or 'custom' | |
| description | Yes | Human-readable description of the asset | |
| maturity_date | No | Maturity date in ISO 8601 format (required for bonds, optional otherwise) | |
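The conditional requirement in the table (maturity_date only for bonds) is exactly the kind of parameter interaction an agent must check before calling. A minimal client-side validation sketch, with illustrative argument values:

```python
def validate_create_asset_args(args: dict) -> list[str]:
    """Return a list of validation errors for a canton_create_asset call."""
    required = ["amount", "issuer", "user_id", "asset_type", "description"]
    errors = [f"missing required field: {f}" for f in required if f not in args]
    if args.get("asset_type") not in ("bond", "equity", "repo", "custom"):
        errors.append("asset_type must be one of: bond, equity, repo, custom")
    # Mirrors the description: maturity_date is required when asset_type is 'bond'.
    if args.get("asset_type") == "bond" and "maturity_date" not in args:
        errors.append("maturity_date is required for bonds")
    return errors

bond_args = {
    "amount": 1_000_000,
    "issuer": "Alice::1220deadbeef",  # illustrative party id
    "user_id": "operator-1",
    "asset_type": "bond",
    "description": "5Y corporate bond",
    "maturity_date": "2030-06-30",
}
assert validate_create_asset_args(bond_args) == []
```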
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It states the tool submits a Create command but does not disclose side effects (e.g., immutability of contracts), authorization requirements, or potential errors. This is insufficient for safe invocation, especially for a mutation operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise sentences: the first states the core action, and the second adds a critical constraint. No superfluous information, making it efficient for an AI agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 6 parameters and no output schema or annotations, the description provides a basic understanding but lacks details on return values, error handling, or prerequisites like domain readiness. It is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by specifying that maturity_date is required for bonds, which is not explicit in the schema's 'required' array. It also lists asset types, reinforcing the enum-like constraint. This extra context justifies a higher score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a tokenized asset as a DAML contract, listing specific asset types (bond, equity, repo, custom) and a key condition for bonds. This precisely defines the action and resource, distinguishing it from sibling tools like canton_submit_command which handles generic commands.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for asset creation but provides no explicit guidance on when to use this tool versus alternatives. It mentions maturity_date is required for bonds, which gives context, but lacks when-not-to-use instructions or comparisons with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_dvp_settle (Grade A)
Execute atomic Delivery-vs-Payment (DvP) settlement on Canton. Creates a DvP settlement contract that atomically swaps the asset leg (delivery) and payment leg in a single DAML transaction, ensuring neither party bears settlement risk.
| Name | Required | Description | Default |
|---|---|---|---|
| buyer | Yes | Buyer party identifier | |
| seller | Yes | Seller party identifier | |
| user_id | Yes | User ID for command submission | |
| payment_amount | Yes | Payment amount in Canton Coin (CC) | |
| asset_contract_id | Yes | Contract ID of the asset leg (delivery) | |
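Since all five parameters are required and both legs must be fixed up front, assembling the call can be sketched as below. The party and contract ids are illustrative, and the sanity checks are assumptions, not documented server behavior:

```python
def build_dvp_settle_args(buyer: str, seller: str, user_id: str,
                          payment_amount: float, asset_contract_id: str) -> dict:
    """Assemble the five required arguments for canton_dvp_settle.

    The asset leg is identified by contract id and the payment leg by a
    Canton Coin amount; the swap itself settles atomically on-ledger.
    """
    if payment_amount <= 0:
        raise ValueError("payment_amount must be positive")
    if buyer == seller:
        raise ValueError("buyer and seller must be distinct parties")
    return {
        "buyer": buyer,
        "seller": seller,
        "user_id": user_id,
        "payment_amount": payment_amount,
        "asset_contract_id": asset_contract_id,
    }

args = build_dvp_settle_args(
    buyer="Alice::1220aaaa",       # illustrative party ids
    seller="Bob::1220bbbb",
    user_id="operator-1",
    payment_amount=250.0,
    asset_contract_id="00abc123",  # illustrative contract id
)
assert set(args) == {"buyer", "seller", "user_id",
                     "payment_amount", "asset_contract_id"}
```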
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden. It discloses that the operation is atomic ('single DAML transaction') and eliminates settlement risk. It does not detail permissions, error conditions, or side effects, but the key behavioral trait (atomicity) is clearly communicated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words, front-loaded with the core action. Perfectly concise for the complexity involved.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description lacks details like error handling, prerequisites, or return value. For a complex atomic operation, more information would be beneficial. However, the essential concept is covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description does not need to elaborate on parameters. It adds no extra meaning beyond what is in the schema, resulting in a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the verb and resource: 'Execute atomic Delivery-vs-Payment (DvP) settlement on Canton'. It distinguishes itself from sibling tools like canton_create_asset or canton_transfer, as no other tool handles DvP settlement.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it (for atomic DvP settlement to avoid settlement risk) but does not explicitly state when not to use it or mention alternatives. However, no sibling tool directly competes, so the guidance is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_get_balance (Grade B)
Get the Canton Coin (CC) balance for a party. Queries the CIP-56 token balance via the Canton JSON Ledger API v2 by looking up active Holding contracts (Splice.Amulet:Amulet template).
| Name | Required | Description | Default |
|---|---|---|---|
| party | Yes | Party to check Canton Coin (CC) balance for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must fully disclose behavior. It fails to state that this is a read-only operation and does not mention authorization needs, rate limits, or error conditions for an invalid party.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the primary purpose. It is not overly verbose, though it could include a brief note on output format without becoming bloated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, no output schema), the description covers the core functionality but omits the return value format, error handling, or any additional context that would fully inform an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and the parameter 'party' is already well-documented in the schema. The description adds no extra meaning, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the Canton Coin (CC) balance for a party, specifying the exact mechanism (CIP-56, Holding contracts) and differentiating from siblings like get_events or get_transaction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for balance inquiries but provides no explicit when-to-use or when-not-to-use guidance, nor mentions alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_get_events (Grade A)
Get create and archive events for a specific DAML contract via the JSON Ledger API v2. Returns the contract lifecycle events including creation arguments, signatories, and archive status.
| Name | Required | Description | Default |
|---|---|---|---|
| contract_id | Yes | Contract ID to get events for | |
| requesting_parties | Yes | Parties requesting the events | |
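The table does not show parameter types, so the shape below is an assumption: requesting_parties is sketched as a list of fully-qualified party ids, since events are only visible to parties authorized on the contract.

```python
def build_get_events_args(contract_id: str, requesting_parties: list[str]) -> dict:
    """Assemble arguments for canton_get_events.

    requesting_parties is assumed to be a list of fully-qualified party
    ids; only events visible to those parties are returned.
    """
    if not contract_id:
        raise ValueError("contract_id is required")
    if not requesting_parties:
        raise ValueError("at least one requesting party is required")
    return {"contract_id": contract_id, "requesting_parties": requesting_parties}

args = build_get_events_args("00abc123", ["Alice::1220aaaa"])  # illustrative ids
assert args["requesting_parties"] == ["Alice::1220aaaa"]
```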
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions using 'JSON Ledger API v2' and returning lifecycle events, implying a read operation. However, it does not explicitly state read-only behavior, permissions, or side effects, leaving some behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with clear verb and resource. No redundant information. Efficiently front-loaded with key purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains what is returned (creation arguments, signatories, archive status) and the API version. For a simple getter with no output schema, it provides adequate context. Could mention error conditions or that contract_id must exist.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% because both parameters have descriptions in the schema. The description adds no additional context or constraints beyond schema, so it meets the baseline but does not enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the verb 'Get', the resource 'create and archive events for a specific DAML contract', and the API version. It distinguishes from siblings like canton_get_transaction and canton_list_contracts by focusing on contract lifecycle events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like canton_get_transaction or canton_list_contracts. The description implies it's for contract lifecycle events, but lacks when-not-to-use or context about prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_get_fee_schedule (Grade A)
Get the fee schedule for a Canton synchronizer domain. Queries the Admin API at /admin/synchronizer/{id}/fee-schedule. Returns base fee, per-byte fee, and other fee parameters.
| Name | Required | Description | Default |
|---|---|---|---|
| synchronizer_id | Yes | Synchronizer domain ID to query fee schedule for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It adds that the tool returns 'base fee, per-byte fee, and other fee parameters' via a specific API endpoint, which provides some behavioral insight. However, it does not disclose side effects, idempotency, or permissions needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: the first states the purpose, the second provides the API endpoint and return values. Every sentence is essential and there is no unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, no output schema), the description is mostly complete. It includes the endpoint and return fields. It could be more complete by mentioning if any authentication or admin privileges are required, but it still provides sufficient context for a basic read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage on the single parameter 'synchronizer_id' with a clear description. The tool description adds no additional semantic value beyond the schema, as the schema already explains the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'fee schedule for a Canton synchronizer domain'. It distinguishes from sibling tools like canton_get_balance or canton_list_contracts by specifying the exact domain-specific resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for reading fee schedules but does not provide explicit guidance on when to use versus alternatives, nor does it mention prerequisites or conditions. It lacks explicit exclusions or comparisons to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_get_health (Grade A)
Check Canton participant health and connectivity. Returns node status, connected domains, active parties, and uptime information.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry behavioral disclosure. It describes the output (node status, domains, etc.) but omits whether the operation is read-only, permissions required, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is perfectly front-loaded with the action ('Check Canton participant health and connectivity') and concisely lists returned information without excess.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters and no output schema, the description provides adequate context: purpose and output details. It could be improved by mentioning the return format or typical use case, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so schema coverage is 100%. The description adds value by explaining what the output contains, going beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool checks participant health and connectivity, listing specific returned information (node status, connected domains, active parties, uptime). This differentiates it from siblings that deal with balances, events, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. Since it's the only health-check tool among siblings, the context is implied, but no when-not-to or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_get_transaction (Grade A)
Get a transaction by transaction ID via the Canton JSON Ledger API v2. Returns the complete transaction including all created, exercised, and archived events.
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_id | Yes | Transaction ID to look up | |
| requesting_parties | Yes | Parties requesting the transaction tree | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It states the return content (complete transaction with events) but does not mention whether the operation is read-only, authentication requirements, rate limits, or error conditions. It adds moderate value beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, and contains no redundant words. Every sentence adds critical information about the tool's function and return value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 required params, no output schema), the description adequately explains what is returned. It could mention error scenarios or response format, but for a straightforward get operation, this is sufficient. Completeness is slightly above average.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already explains the two parameters. The description does not add new semantic context (e.g., format of transaction_id, role of requesting_parties). Baseline score of 3 is appropriate as no extra value is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the verb 'Get', the resource 'transaction by transaction ID', and lists the return types (created, exercised, archived events). It clearly distinguishes this from sibling tools like canton_get_events by specifying a distinct operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There are no explicit contexts, prerequisites, or exclusions. The description implies usage (when you have a transaction ID) but does not help the agent decide between this and related tools like canton_get_events or canton_list_contracts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_list_contracts (Grade A)
Query active DAML contracts on a Canton participant via the JSON Ledger API v2. Filters by template ID and party. Returns contract IDs, payloads, signatories, and observers.
| Name | Required | Description | Default |
|---|---|---|---|
| party | Yes | Party to query active contracts for | |
| template_id | Yes | DAML template identifier to filter by (e.g. 'Module:Template') | |
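The 'Module:Template' example in the table is the only syntax hint for template_id, so a caller may want to enforce it client-side. A sketch under that assumption, with illustrative party and template values:

```python
def build_list_contracts_args(party: str, template_id: str) -> dict:
    """Assemble arguments for canton_list_contracts.

    Enforces the 'Module:Template' shape the parameter table gives as an
    example; the server's actual accepted formats are not documented here.
    """
    module, sep, template = template_id.partition(":")
    if not (sep and module and template):
        raise ValueError(
            f"template_id must look like 'Module:Template', got {template_id!r}")
    return {"party": party, "template_id": template_id}

args = build_list_contracts_args("Alice::1220aaaa", "Asset:Bond")  # illustrative
assert args == {"party": "Alice::1220aaaa", "template_id": "Asset:Bond"}
```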
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It mentions the API version and return fields, but does not explicitly state that the operation is read-only or disclose any behavioral traits like pagination, rate limits, or potential side effects. It is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences: the first states the action and API, the second details filters and output. No unnecessary words, well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately covers return values (contract IDs, payloads, signatories, observers) and the two required parameters. It explains the tool's purpose sufficiently for a simple query tool, though it could mention if results are paginated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with both parameters already described in the schema (party and template_id). The description adds little beyond restating that these are used for filtering, so it meets the baseline but does not enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool queries active DAML contracts via a specific API, filters by template ID and party, and lists the return fields. It differentiates from sibling tools like canton_create_asset or canton_get_transaction by focusing on listing contracts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing active contracts by template and party, but does not explicitly state when to use this tool versus alternatives like canton_get_events or canton_get_transaction. No exclusion or alternative guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_list_domains (Grade A)
List connected Canton synchronization domains (synchronizers). Returns domain IDs, connection status, sequencer endpoints, and whether each is the Global Synchronizer.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It lists output fields but fails to disclose whether the operation is read-only, requires authentication, or has side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the action and resource, and then lists return fields. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameter-less list tool, the description adequately explains what is returned. However, it lacks details about error conditions or edge cases, preventing a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has zero parameters, and description coverage is 100% (since no params to document). Baseline is 4, and the description adds no additional parameter meaning because there are none.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') and resource ('Canton synchronization domains'), and clearly states what is returned (domain IDs, connection status, sequencer endpoints, Global Synchronizer flag). This distinguishes it from sibling list tools like canton_list_parties.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage (to list domains) but provides no explicit guidance on when to use this tool versus alternatives, nor any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_list_parties (Grade: A)
List all known parties on the Canton participant node. Returns party identifiers and hosting participant information.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description implies read-only behavior ('List all... Returns...') but does not explicitly state safety or disclose any limitations or side effects. Adequate for a simple list operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with clear front-loading: 'List all known parties on the Canton participant node.' Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description adequately covers the tool's purpose and return values ('party identifiers and hosting participant information'). No missing context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is trivially 100%. The description adds no parameter information, but none is needed; the baseline of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'List' and resource 'parties', clearly indicates scope ('all known parties on the Canton participant node'), and distinguishes from siblings like allocate_party or create_asset.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as canton_allocate_party or canton_list_contracts. Does not mention prerequisites or context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_submit_command (Grade: B)
Submit a DAML command (Create or Exercise) to the Canton JSON Ledger API v2. Creates new contracts or exercises choices on existing ones. Returns the transaction with created/exercised events.
| Name | Required | Description | Default |
|---|---|---|---|
| act_as | Yes | Party to act as (e.g. 'Alice::fingerprint') | |
| choice | No | Choice name to exercise (required for 'exercise' command type) | |
| user_id | Yes | User ID for command submission | |
| arguments | Yes | Command arguments as a JSON object string | |
| contract_id | No | Contract ID to exercise on (required for 'exercise' command type) | |
| template_id | Yes | DAML template identifier (e.g. 'Module:Template') | |
| command_type | Yes | Command type: 'create' or 'exercise' |
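To make the parameter table concrete, here is a minimal sketch of argument payloads for the two command types. Field names follow the table above; the template, choice, party fingerprints, and user ID are illustrative assumptions, not values from the server.

```python
import json

# Hypothetical 'create' call: no contract_id or choice needed.
create_call = {
    "command_type": "create",
    "template_id": "Asset:Token",  # assumed Module:Template name
    "arguments": json.dumps({"owner": "Alice::1220abc", "amount": "100"}),
    "act_as": "Alice::1220abc",    # illustrative party identifier
    "user_id": "mcp-agent",
}

# Hypothetical 'exercise' call: contract_id and choice become required.
exercise_call = {
    "command_type": "exercise",
    "template_id": "Asset:Token",
    "contract_id": "00abc123",     # placeholder contract ID
    "choice": "Transfer",          # assumed choice name
    "arguments": json.dumps({"newOwner": "Bob::1220def"}),
    "act_as": "Alice::1220abc",
    "user_id": "mcp-agent",
}
```

Note that `arguments` is a JSON object serialized as a string, per the schema, rather than a nested object.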
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full transparency burden. While it discloses that the tool submits commands (mutating) and returns events, it omits critical behavioral details such as required permissions, error conditions (e.g., invalid arguments), idempotency, or side effects like contract consumption on exercise.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long with no filler. It front-loads the verb 'Submit' and the resource 'DAML command', immediately conveying the tool's core function. Every phrase earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description states the return value is 'the transaction with created/exercised events', which is helpful context. However, it could detail the transaction structure or error handling. Given the schema richness and sibling set, the description is mostly complete but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so each parameter is already explained in the schema. The description adds no extra meaning beyond 'Submit a DAML command (Create or Exercise)'; it does not clarify parameter relationships or provide examples. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool submits a DAML command (Create or Exercise) to the Canton JSON Ledger API v2, creating contracts or exercising choices. It distinguishes itself from sibling read-only tools like canton_get_transaction or canton_list_contracts by explicitly mentioning the mutation actions and the return of transaction events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs alternatives like canton_create_asset (a specific create) or canton_dvp_settle (a specific exercise). The description does not suggest when to choose 'create' vs 'exercise' or mention prerequisites like party allocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_transfer (Grade: B)
Transfer Canton Coin (CC) tokens between parties. Submits a DAML transfer command via the JSON Ledger API v2 using the CIP-56 Amulet transfer workflow (Splice.AmuletRules:Transfer template).
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount of Canton Coin (CC) to transfer | |
| user_id | Yes | User ID for command submission | |
| to_party | Yes | Recipient party identifier | |
| from_party | Yes | Sender party identifier |
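A transfer call can be sketched directly from the table above. The party identifiers and user ID below are illustrative assumptions; the schema does not specify whether `amount` is a number or a decimal string, so a plain number is assumed here.

```python
# Hypothetical canton_transfer arguments, matching the parameter table.
transfer_call = {
    "from_party": "Alice::1220abc",  # sender (illustrative fingerprint)
    "to_party": "Bob::1220def",      # recipient (illustrative fingerprint)
    "amount": 25.0,                  # Canton Coin (CC) amount, assumed numeric
    "user_id": "mcp-agent",
}
```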
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must disclose behavioral traits. It mentions submitting a DAML command, indicating a write operation, but does not detail side effects, permissions, or success/failure states. Misses opportunity to clarify mutation or reversibility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. The first states the core purpose; the second adds implementation detail. It could be more front-loaded, but it is efficient, and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, and description does not explain return values, confirmation, or error conditions. For a command-submitting tool, this is insufficient. Lacks details on transaction ID or asynchronous behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds no extra meaning to parameters beyond the schema; it merely restates 'Canton Coin (CC)' which is already in the amount description. Minimal added value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'Transfer', resource 'Canton Coin (CC) tokens', and parties involved. Distinguishes from sibling tools like canton_create_asset and canton_get_balance. Includes specific workflow reference for clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Does not mention prerequisites, exclusions, or context for using another transfer method. Only describes the action itself.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
canton_upload_dar (Grade: A)
Upload a DAR (DAML Archive) file to the Canton participant node. The DAR is installed and its packages become available for contract creation. Provide base64-encoded DAR content.
| Name | Required | Description | Default |
|---|---|---|---|
| filename | No | Optional filename for the DAR package | |
| dar_content_base64 | Yes | Base64-encoded DAR file content (use base64 encoding of the .dar binary) |
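Preparing the `dar_content_base64` parameter is a one-line base64 step. The sketch below uses stand-in bytes (DAR files are ZIP archives, so the ZIP magic bytes serve as a placeholder); a real call would read the `.dar` binary from disk instead.

```python
import base64

# Stand-in for reading a real .dar file: DARs are ZIP archives,
# so we use the ZIP local-file-header magic as sample content.
dar_bytes = b"PK\x03\x04" + b"\x00" * 4

upload_call = {
    "dar_content_base64": base64.b64encode(dar_bytes).decode("ascii"),
    "filename": "my-model.dar",  # optional, illustrative name
}
```

For an actual file, `open(path, "rb").read()` would replace `dar_bytes`.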
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It states that the DAR is installed and packages become available, which implies a permanent change. However, it does not disclose potential side effects (e.g., overwriting existing packages), required permissions, error conditions, or whether the operation is idempotent. With no annotations, this is a moderate gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first states purpose and effect, the second provides input instruction. Every sentence is essential, no fluff, perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the input clearly and states the outcome. However, there is no output schema and the description does not mention what the response contains (e.g., success indicator, package ID). For a simple tool, this is a minor omission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds little beyond what the schema already provides: 'Provide base64-encoded DAR content' essentially repeats the parameter description. The baseline of 3 is appropriate as the schema already documents the parameters well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Upload), resource (DAR file), and outcome (installed, packages available). It distinguishes itself from sibling tools, which are all different operations, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description tells the user to provide base64-encoded DAR content, but does not explicitly state when to use this tool versus alternatives. However, among sibling tools, this is the only upload tool, so context implies usage. Could be improved by adding prerequisites or alternative scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!