Glama
Ownership verified

Server Details

ClearPolicy is a document signing and compliance tracking tool for organizations. Once connected, your AI assistant can import documents, send signature requests, track who has and hasn't signed, and manage your contacts — all by prompt.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 3.8/5 across 10 of 10 tools scored.

Server Coherence: Grade A
Disambiguation: 5/5

Each tool has a distinct purpose with clear boundaries: canceling requests, creating/listing/getting people and documents, managing signing requests, and sending reminders. No overlap exists; for example, 'list-documents-tool' is for browsing, while 'get-document-tool' provides detailed status, and 'send-signing-request-tool' initiates the process.

Naming Consistency: 5/5

All tools follow a consistent verb-noun pattern with hyphens (e.g., 'cancel-signing-request-tool', 'list-documents-tool'), making them predictable and readable. There are no deviations in style or convention across the set.

Tool Count: 5/5

With 10 tools, the server is well-scoped for document signing and acknowledgment workflows, covering key operations like creation, listing, sending, tracking, and reminders. Each tool earns its place without feeling excessive or insufficient for the domain.

Completeness: 5/5

The tool set provides complete CRUD/lifecycle coverage for the domain: creating and listing people/documents, sending and canceling requests, tracking status, and sending reminders. No obvious gaps exist; agents can manage the entire workflow from start to finish without dead ends.

Available Tools

10 tools
cancel-signing-request-tool (Cancel Signing Request Tool): Grade A
Destructive

Cancel a pending signature or acknowledgment request. This cannot be undone.

Parameters (JSON Schema)
request_id (required): The ULID of the signing request to cancel.
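As a sketch of how an MCP client behind the gateway might invoke this tool, the payload below builds a JSON-RPC `tools/call` request; the `request_id` ULID is an invented example value, not a real identifier.

```python
import json

# Hypothetical MCP tools/call request for cancel-signing-request-tool.
# The request_id below is a made-up example ULID.
cancel_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cancel-signing-request-tool",
        "arguments": {"request_id": "01HZY3Q8K2M4N6P8R0T2V4X6Z8"},
    },
}
print(json.dumps(cancel_call, indent=2))
```

Because the call is destructive and irreversible, a cautious client would confirm the request is still pending (e.g. via get-signing-request-tool) before sending it.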
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare destructiveHint=true, the description adds critical behavioral context by explicitly stating 'This cannot be undone,' reinforcing the permanent nature of the action. It also clarifies state sensitivity by specifying this only works on 'pending' requests.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The two-sentence structure is optimally front-loaded: the first sentence defines the core action, and the second sentence delivers essential irreversibility warnings. Zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter destructive operation without output schema, the description adequately covers the action scope, irreversibility warning, and state constraint (pending). It could be improved by noting error cases (e.g., already-completed requests), but is complete enough for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents the request_id parameter including its ULID format. The description does not add parameter-specific semantics, which is acceptable given the complete schema baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific action (Cancel) and target resource (pending signature or acknowledgment request), clearly distinguishing this from sibling tools like send-signing-request-tool, get-signing-request-tool, and list-signing-requests-tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context through the 'pending' qualifier (indicating it only works on pending requests) and the irreversibility warning ('cannot be undone'). However, it does not explicitly mention alternatives like send-reminder-tool for non-destructive follow-ups.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create-person-tool (Create Person Tool): Grade A
Destructive

Add someone new to the organization so they can be sent documents for signature or acknowledgment.

Parameters (JSON Schema)
name (required): Full name of the person.
email (required): Email address of the person. Must be unique within the organization.
phone (optional): Phone number in international format (e.g. +12125551234).
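A minimal sketch of the arguments an agent might pass to this tool; the person's name, email, and phone number are invented illustration values.

```python
# Hypothetical arguments for create-person-tool; all values are invented.
# name and email are required; phone is optional and, per the schema,
# should use international format when supplied.
person_args = {
    "name": "Ada Example",
    "email": "ada@example.com",  # must be unique within the organization
    "phone": "+12125551234",     # optional
}
create_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "create-person-tool", "arguments": person_args},
}
```

Since the email must be unique, checking for an existing record first (e.g. with list-people-tool) avoids duplicate-creation errors.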
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true (creation side-effects). The description adds value by specifying the organizational scope and downstream purpose (enabling document sending), but omits behavioral details like idempotency, error handling for duplicate emails, or confirmation that this creates a persistent record.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, efficiently structured sentence with zero waste. Front-loaded with the action ('Add someone new'), followed by scope ('to the organization'), and purpose clause ('so they can be sent documents...'), earning its place with every clause.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple 3-parameter creation tool with no output schema. Covers the essential 'why' (document signing workflow) and scope (organization). Could be improved by mentioning the relationship to send-signing-request-tool explicitly or duplicate handling behavior, but sufficient for the complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (all 3 parameters fully documented). The description meets the baseline by implying the required 'name' and 'email' through 'someone new,' but does not add syntax details or parameter interdependencies beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Add' with resource 'someone new to the organization' and distinguishes from siblings (e.g., get-person-tool, list-people-tool) by emphasizing creation of new records rather than retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear workflow context ('so they can be sent documents for signature') explaining when to use this tool in the document signing process. However, it does not explicitly state exclusions (e.g., 'do not use if person already exists') or name specific alternatives like get-person-tool for checking existing records.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-document-tool (Get Document Tool): Grade B
Read-only

Get details of a document, including how many people have signed or acknowledged it and how many haven't.

Parameters (JSON Schema)
document_id (required): The ULID of the document.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The 'readOnlyHint: true' annotation already indicates this is a safe read operation, which aligns with the description's 'Get' verb. The description adds value by disclosing what specific data is returned (signature/acknowledgment counts), compensating somewhat for the missing output schema. It does not address error states or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence that front-loads the core action ('Get details of a document') and appends specific, valuable detail about the returned data. No words are wasted.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with one parameter and no output schema, the description adequately covers the tool's purpose and hints at the return value structure (signature counts). It could be improved by explicitly stating this retrieves a single document (contrasting with 'list-documents-tool'), but it is sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'document_id' parameter, the schema sufficiently documents the input. The description does not add parameter-specific context (e.g., where to obtain the ULID), but no additional compensation is needed given the complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('document'), and uniquely specifies the returned content includes signature/acknowledgment counts. However, it does not explicitly distinguish this single-document retrieval from the sibling 'list-documents-tool' or clarify the difference between a 'document' and a 'signing request' (another sibling).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like 'list-documents-tool' (for multiple documents) or 'get-signing-request-tool' (for request status vs. document details). There are no stated prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-person-tool (Get Person Tool): Grade A
Read-only

Get a person's profile and see which documents they have or haven't signed or acknowledged.

Parameters (JSON Schema)
person_id (required): The ULID of the person.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description aligns with readOnlyHint: true (no contradiction) and adds valuable behavioral context not in annotations: it discloses that the tool returns document signing/acknowledgment status. This compensates for the missing output schema by describing what data the agent can expect to retrieve.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 15 words. Front-loaded with action verb. Every clause earns its place: 'profile' establishes the entity, while 'documents they have or haven't signed or acknowledged' specifies the unique data returned. Zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter lookup tool, the description is complete. It effectively compensates for the lack of output schema by describing the return value (signing status). Could improve by noting prerequisite steps (obtaining person_id) or error conditions, but sufficient for the complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents the person_id parameter as a ULID. The description does not mention the parameter, but none is needed given the complete schema. Baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') + resource ('person's profile') + distinct scope ('see which documents they have or haven't signed'). It clearly distinguishes from siblings: contrasts with create-person-tool (read vs write), list-people-tool (single entity vs list), and get-signing-request-tool (person-centric vs request-centric view).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying the unique value proposition (checking signing/acknowledgment status), which helps distinguish it from generic person lookup. However, it lacks explicit guidance on when to use list-people-tool first to obtain the person_id, or when to prefer get-signing-request-tool instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-signing-request-tool (Get Signing Request Tool): Grade B
Read-only

Check whether a specific person has signed or acknowledged a document, and when.

Parameters (JSON Schema)
request_id (required): The ULID of the signing request.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The readOnlyHint annotation already establishes this is a safe read operation. The description adds value by specifying the exact data retrieved (signature status and timestamp). However, it omits error behavior (e.g., invalid ULID handling) and does not disclose if the returned data is real-time or cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy. It front-loads the action verb ('Check') and efficiently communicates the core value proposition (status + timestamp) without filler words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no nested objects) and read-only nature, the description adequately covers the return value intent ('whether... signed... and when') despite the absence of an output schema. It could be improved by mentioning error cases or data freshness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents the request_id parameter as 'The ULID of the signing request.' The description implies specificity ('specific person') but does not add syntax guidance, format examples, or explain how the request_id relates to the person/document mentioned.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks signing/acknowledgment status and retrieval timing ('and when'). It implies targeting a specific entity ('specific person'), distinguishing it from sibling list-signing-requests-tool. However, it doesn't explicitly mention the request_id parameter or explicitly contrast with all siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like list-signing-requests-tool or send-signing-request-tool. It states what the tool does but not the conditions or prerequisites for invocation (e.g., 'use this when you have a request_id and need current status').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list-documents-tool (List Documents Tool): Grade A
Read-only

Find documents available to send for employee signatures or policy acknowledgments. Use this to browse what documents the organization has ready to send.

Parameters (JSON Schema)
name (optional): Filter documents by name (partial match, case-insensitive).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context that this lists documents specifically 'available to send' (suggesting a filtered set of ready/template documents), which supplements the readOnlyHint annotation. However, it omits details about pagination, result limits, or the structure of returned documents.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. The first establishes purpose and scope; the second clarifies the browsing use case. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional filter parameter) and read-only nature, the description is sufficiently complete. While it does not describe the return format (absent an output schema), the 'browse' verb adequately signals a collection return type for this simple listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'name' parameter, the baseline score applies. The description does not add additional semantic context about the parameter (such as example search terms or wildcard behavior), but the schema is self-sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Find', 'browse') and identifies the resource ('documents'). It effectively distinguishes this from sibling 'get-document-tool' by implying a list operation versus single-item retrieval, though it doesn't explicitly name the alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context for when to use the tool ('available to send for employee signatures or policy acknowledgments'), implying its role in signature workflows. However, it lacks explicit guidance on when NOT to use it or direct comparisons to sibling tools like 'get-document-tool'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list-people-tool (List People Tool): Grade A
Read-only

Find people in the organization who can be sent documents for signature or acknowledgment. Search by name or email.

Parameters (JSON Schema)
name (optional): Filter by name (partial match, case-insensitive).
email (optional): Filter by email address (partial match).
include_archived (optional, default: false): Include archived people in the results.
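All three parameters are optional filters, so a sketch of a typical search call needs only the fields the agent cares about; omitting include_archived falls back to the schema default of false. The filter value here is invented.

```python
# Hypothetical list-people-tool call: partial, case-insensitive name match.
# include_archived is shown explicitly, though omitting it defaults to False.
people_args = {"name": "ada", "include_archived": False}
list_people_call = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "list-people-tool", "arguments": people_args},
}
```

A common pattern is to use this tool first to resolve a person's ULID, then pass that ID to get-person-tool or send-signing-request-tool.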
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description is consistent with the readOnlyHint annotation (safe read operation). It adds valuable domain context that these people are potential document recipients. However, it lacks operational details such as pagination behavior, result limits, or the default exclusion of archived records (mentioned only in schema).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes purpose and domain context, second clarifies search capabilities. Front-loaded with the most critical information and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema (3 flat parameters, no required fields) and readOnly annotation, the description adequately covers the tool's intent. Minor gap: as a list operation without an output schema, it could mention expectations around result set size or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is appropriately met. The description confirms the search intent ('Search by name or email') aligning with the partial match filters in the schema, but does not add significant semantic depth beyond what the schema already provides for the include_archived parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Find') with clear resource ('people') and organizational context ('who can be sent documents for signature or acknowledgment'). It effectively distinguishes from sibling 'get-person-tool' (retrieval by ID) and 'create-person-tool' by implying a search/list operation across the organization.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear workflow context (use when seeking recipients for signatures/acknowledgments) and implies search functionality ('Search by name or email'). However, it does not explicitly name alternatives like 'get-person-tool' for ID-based lookups or state when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list-signing-requests-tool (List Signing Requests Tool): Grade A
Read-only

Check who has signed a document and who hasn't. Filter by document, person, or status (pending, signed, expired, etc.) to track compliance.

Parameters (JSON Schema)
status (optional): Filter by status: created, sent, viewed, attested, expired, or canceled.
person_id (optional): Filter by person ULID.
document_id (optional): Filter by document ULID.
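Because the status filter is a fixed enum, a client can validate it locally before calling the tool. The helper below is a sketch (the function name and the document ULID are invented for illustration):

```python
# Allowed status values per the list-signing-requests-tool schema.
ALLOWED_STATUSES = {"created", "sent", "viewed", "attested", "expired", "canceled"}

def build_list_requests_args(status=None, person_id=None, document_id=None):
    """Build the arguments dict, rejecting statuses outside the schema enum
    and dropping filters that were not supplied."""
    if status is not None and status not in ALLOWED_STATUSES:
        raise ValueError(f"unknown status: {status}")
    args = {"status": status, "person_id": person_id, "document_id": document_id}
    return {k: v for k, v in args.items() if v is not None}

args = build_list_requests_args(status="sent",
                                document_id="01HZY3Q8K2M4N6P8R0T2V4X6Z8")
```

Note that colloquial values such as "pending" or "signed" (used in the tool's prose description) are not in the schema enum and would be rejected by this check.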
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The readOnlyHint annotation confirms safe read access, and the description adds the business context of compliance tracking. However, the description mentions status values 'pending, signed' which don't exactly match the schema enum ('created, sent, viewed, attested'), creating minor ambiguity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently pack the value proposition (compliance checking) and functional capabilities (filtering). Every word earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 3-parameter list tool with read-only annotations, the description adequately covers intent and filtering. It appropriately omits return value details (no output schema exists to describe), though mentioning that it returns a list would be a slight improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description reinforces that parameters act as filters but doesn't add semantic details beyond the schema (e.g., explaining that 'person_id' is a ULID, though the schema covers this).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks signature status ('Check who has signed') and supports compliance tracking. However, it doesn't explicitly distinguish this list operation from the sibling 'get-signing-request-tool' (singular), though the mention of filtering implies bulk retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides context for when to use the tool ('track compliance') and mentions applicable filters (document, person, status). However, it lacks explicit guidance on when to use this versus the singular 'get-signing-request-tool' or other siblings like 'send-signing-request-tool'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send-reminder-tool (Send Reminder Tool): Grade A
Destructive

Nudge someone who hasn't signed or acknowledged a document yet by sending them a reminder email.

Parameters (JSON Schema)
request_id (required): The ULID of the signing request to send a reminder for.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true, and the description adds that the mechanism is specifically an 'email' (communication channel) and clarifies the trigger condition (unsigned documents). It does not specify rate limits, retry safety, or exact state changes beyond the email dispatch.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with zero waste. Front-loaded with the action ('Nudge'), immediately qualified with the condition ('who hasn't signed...'), and closes with the mechanism ('reminder email'). Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (1 parameter, 100% schema coverage, no output schema) and presence of destructive annotations, the description adequately covers the tool's purpose, mechanism, and triggering condition. Minor gap: doesn't specify failure behavior if document already signed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description provides semantic context that the request_id relates to unsigned documents, but does not add parameter-specific syntax or format details beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Nudge'/'sending a reminder email'), the target ('someone who hasn't signed or acknowledged a document yet'), and distinguishes from siblings by specifying this is a reminder for existing unsigned requests versus creating new requests (send-signing-request-tool).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying the prerequisite state ('hasn't signed or acknowledged'), indicating when to use it. However, it lacks explicit guidance contrasting with send-signing-request-tool or stating when NOT to use it (e.g., if already signed).


send-signing-request-tool (Send Signing Request Tool): Grade A
Destructive

Send a document to one or more people to sign or acknowledge. Use this when a user wants their team or contacts to sign a policy, agreement, or compliance document.

Parameters (JSON Schema)
Name             | Required | Description                                                     | Default
person_ids       | Yes      | Array of person ULIDs to send the request to.                   | (none)
document_id      | Yes      | The ULID of the document to send.                               | (none)
attestation_type | No       | Override the attestation type: "acknowledgment" or "signature". | The document's default.
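A minimal sketch of assembling this tool's arguments, assuming only the enum and optionality stated in the schema above. `build_signing_request` is a hypothetical client-side helper, not part of the ClearPolicy API; omitting attestation_type leaves the server to apply the document's default, so the helper simply drops the key in that case.

```python
from typing import Optional

# Enum values taken from the attestation_type schema description.
ATTESTATION_TYPES = {"acknowledgment", "signature"}

def build_signing_request(person_ids: list[str],
                          document_id: str,
                          attestation_type: Optional[str] = None) -> dict:
    """Sketch of the arguments dict for send-signing-request-tool.

    When attestation_type is None the key is omitted entirely,
    deferring to the document's default on the server side.
    """
    if attestation_type is not None and attestation_type not in ATTESTATION_TYPES:
        raise ValueError(
            f"attestation_type must be one of {sorted(ATTESTATION_TYPES)}"
        )
    args = {"person_ids": person_ids, "document_id": document_id}
    if attestation_type is not None:
        args["attestation_type"] = attestation_type
    return args
```

Sending the key with a null value instead of omitting it is the kind of ambiguity the schema's "Defaults to the document's default" wording leaves open, which is why the helper omits rather than nulls.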
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true, indicating a write operation. The description confirms the action ('Send') but does not elaborate on behavioral side effects beyond what the annotation provides (e.g., whether recipients are notified immediately, whether the operation is irreversible, or what 'destructive' specifically entails in this context).


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero waste: first defines the action and scope, second provides usage context. Front-loaded and appropriately sized for the tool's complexity.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 3-parameter tool with 100% schema coverage and no output schema. Could be strengthened by mentioning that this creates a signing request entity (given the sibling lifecycle tools: cancel, get, list), but it covers the core interaction model sufficiently.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds semantic value by contextualizing 'person_ids' as 'team or contacts' and 'document_id' as 'policy, agreement, or compliance document', providing business context beyond the technical ULID references in the schema.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Send') and resources ('document') and explicitly covers both supported actions ('sign or acknowledge'), clearly distinguishing this from generic messaging or document sharing tools.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit positive guidance ('Use this when a user wants their team or contacts to sign...') with concrete examples (policy, agreement, compliance). Lacks explicit differentiation from siblings like 'send-reminder-tool' or when NOT to use this vs alternatives.
