Glama

Server Details

Fill standard legal agreement templates (NDAs, SAFEs, NVCA docs, employment) as DOCX files.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: open-agreements/open-agreements
GitHub Stars: 28
Server Listing: open-agreements


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
check_signature_status
Read-only

Check the status of a signing envelope. When status is "completed", includes a download URL for the signed PDF.

Parameters (JSON Schema)
envelope_id (required): The envelope ID returned by send_for_signature.
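As a rough sketch, an agent's call to this tool over MCP's JSON-RPC transport might look like the following. The envelope ID shown is hypothetical; real IDs come from send_for_signature.

```python
# Sketch of a JSON-RPC "tools/call" request for check_signature_status.
# "env_abc123" is a made-up envelope ID for illustration only.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_signature_status",
        "arguments": {"envelope_id": "env_abc123"},
    },
}

payload = json.dumps(request)  # wire format sent to the server
```

Per the description above, a result with status "completed" would also carry a download URL for the signed PDF; the exact response shape is not published here.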
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, so safety is covered. The description adds crucial behavioral context that the output includes a download URL specifically when status is 'completed'—this conditional output behavior is not inferable from annotations or schema. Does not mention error states or rate limits, but covers the primary behavioral quirk.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence establishes core purpose immediately; second sentence adds critical output information (download URL condition). No filler or redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description partially compensates by revealing key output fields (status and conditional download URL). Adequate for a simple 2-parameter read-only tool. Could improve by enumerating possible status values or describing error scenarios, but sufficient for agent invocation decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50% (envelope_id lacks description, api_key has description). The description fails to compensate for the undocumented envelope_id parameter—it never mentions what the envelope_id represents or how to obtain it. References the download behavior which loosely relates to api_key's purpose, but doesn't explicitly map parameters to functionality.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Check') and resource ('status of a signing envelope'). Clearly distinguishes from sibling tools like send_for_signature (creation) and fill_template (preparation) by focusing on status retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies polling usage via 'Check the status' but lacks explicit when-to-use guidance (e.g., 'call this after send_for_signature to track progress') or when-not-to-use alternatives. The second sentence hints at workflow (checking for 'completed' status) but doesn't fully contextualize the tool in the signature lifecycle.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fill_template

Fill a legal agreement template with field values and return a document via URL or MCP resource preview metadata. For recipe templates (e.g. NVCA), also generates a redline (track-changes) document comparing the filled output against the standard form.

Parameters (JSON Schema)
values (optional): Field values to fill in the template.
template (required): Template ID, e.g. "common-paper-mutual-nda".
return_mode (optional): Artifact return mode. Defaults to "url".
redline_base (optional): Base document for redline comparison. "source" = raw standard form (default), "clean" = cleaned intermediate.
include_redline (optional): Generate a redline (track-changes) document. Defaults to true for recipes.
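A sketch of invocation arguments for this tool. The template ID matches the schema's own example; the field names inside "values" are invented for illustration and would really come from get_template.

```python
# Hypothetical fill_template arguments. Only "template" is required;
# the keys inside "values" are assumed, not documented here.
arguments = {
    "template": "common-paper-mutual-nda",
    "values": {
        "party_1_name": "Acme Corp",   # hypothetical field name
        "party_2_name": "Globex LLC",  # hypothetical field name
    },
    "return_mode": "url",              # the schema default
    "include_redline": False,          # redlines apply to recipe templates
}
```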
Behavior: 4/5

Annotations establish readOnlyHint=false (creates content) and destructiveHint=false. Description adds valuable behavioral context: dual return modes (URL vs MCP resource), and specific redline generation behavior for recipes comparing against 'standard form'—context not present in annotations.

Conciseness: 5/5

Two sentences with zero waste. First sentence front-loads core purpose and return mechanism; second efficiently covers the conditional redline behavior for recipe templates. Every clause earns its place.

Completeness: 4/5

Comprehensive for a document-generation tool: explains output artifacts (filled document + conditional redline) despite missing output schema, clarifies enum behavior (recipe templates trigger redlines), and addresses nested object purpose ('field values').

Parameters: 4/5

Despite 100% schema coverage (baseline 3), description adds semantic value: explains return_mode practically ('via URL or MCP resource'), and clarifies redline_base purpose ('comparing the filled output against the standard form'), enriching the raw schema definitions.

Purpose: 5/5

Specific verb 'Fill' + resource 'legal agreement template' clearly identifies the operation. The description distinguishes from siblings like get_template/list_templates by emphasizing document generation and field population, versus mere metadata retrieval.

Usage Guidelines: 3/5

Provides implied usage context through mention of 'recipe templates (e.g. NVCA)' indicating when redline generation occurs. However, lacks explicit guidance contrasting with sibling tools (e.g., 'use this to generate documents vs get_template to retrieve metadata') or prerequisites.

get_template
Read-only

Fetch a single template definition with full field metadata.

Parameters (JSON Schema)
template_id (required): Template ID, e.g. "common-paper-mutual-nda".
Behavior: 4/5

Annotations declare readOnlyHint=true; description adds valuable behavioral context that it returns 'full field metadata' (indicating richness of response) without contradicting safety hints.

Conciseness: 5/5

Single efficient sentence with zero waste. Front-loaded action verb with precise qualifiers ('single', 'full field metadata') that convey scope and response content.

Completeness: 4/5

Adequate for a simple read-only retrieval tool with one required parameter. Mentions field metadata to compensate for missing output schema. No critical gaps given tool complexity.

Parameters: 3/5

Schema has 100% description coverage with example value; description does not add parameter-specific semantics but meets baseline expectation when schema documentation is complete.

Purpose: 5/5

Excellent specificity: verb 'Fetch', resource 'template definition', scope 'single' distinguishes from sibling list_templates/search_templates, and 'definition' distinguishes from fill_template which likely populates data.

Usage Guidelines: 3/5

Implies usage through 'single' (suggests use when you have a specific ID) but lacks explicit when-to-use guidance contrasting with list_templates or search_templates siblings.

list_templates
Read-only

List all available legal agreement templates. Supports compact and full metadata modes for browsing. For finding templates by topic, jurisdiction, or source, use search_templates instead.

Parameters (JSON Schema)
mode (optional): Response detail mode. Defaults to "full".
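The browsing-versus-searching split the description draws can be reduced to a one-line routing rule. This helper is illustrative only, not part of the server:

```python
def pick_listing_tool(knows_agreement_kind: bool) -> str:
    """Route between the two discovery tools per their descriptions:
    browse with list_templates; use search_templates when you already
    know what kind of agreement you need."""
    return "search_templates" if knows_agreement_kind else "list_templates"

# Browsing everything, e.g. with arguments {"mode": "compact"}:
tool = pick_listing_tool(False)
```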
Behavior: 3/5

Annotations already confirm read-only safety (readOnlyHint: true). The description adds scope context ('all available') and use-case hint ('for browsing'), but does not disclose pagination behavior, rate limits, or what specific data fields differ between compact and full modes.

Conciseness: 5/5

Three efficient sentences: purpose declaration, feature detail, and alternative routing. Every sentence earns its place. Information is front-loaded with the core action, followed by usage modes and sibling distinction.

Completeness: 4/5

For a single-parameter browsing tool with good safety annotations, the description adequately covers invocation requirements. Minor gap: it doesn't explain what data constitutes 'compact' vs 'full' given the absence of an output schema, though the parameter naming offers reasonable inference.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description mentions 'compact and full metadata modes' which aligns with the schema, adding only minor context ('metadata', 'for browsing') that doesn't substantially expand on the schema's 'Response detail mode' definition.

Purpose: 5/5

The description opens with a clear specific verb ('List') and resource ('legal agreement templates'), explicitly stating the scope ('all available'). It effectively distinguishes from the sibling tool search_templates by contrasting browsing (this tool) with finding by filters (the sibling).

Usage Guidelines: 5/5

Provides explicit when-not-to-use guidance: 'For finding templates by topic, jurisdiction, or source, use search_templates instead.' This clearly delineates the browsing/search boundary and names the correct alternative for filtered queries.

search_templates
Read-only

Search for legal agreement templates by keyword. Uses BM25 ranking to find the most relevant templates matching your query. Searches across template names, descriptions, categories, sources, and field definitions. Use this instead of list_templates when you know what kind of agreement you need.

Parameters (JSON Schema)
query (required): Search query. Examples: "NDA", "employment offer letter", "NVCA stock purchase", "data processing GDPR", "non-compete Wyoming".
source (optional): Optional source filter (exact, case-insensitive). Values: "Common Paper", "Bonterms", "Y Combinator", "NVCA", "OpenAgreements".
category (optional): Optional category filter (exact, case-insensitive). Values: confidentiality, employment, sales-licensing, data-compliance, deals-partnerships, professional-services, venture-financing, other.
max_results (optional): Maximum results to return (1-50, default 10).
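A sketch of arguments that respect the documented constraints: the query is one of the schema's own examples, the category is one of the documented values, and the clamp mirrors the 1-50 range on max_results. The clamping helper is illustrative; the server presumably enforces the range itself.

```python
def clamp_max_results(requested: int) -> int:
    # The schema bounds max_results to 1-50 (default 10).
    return max(1, min(50, requested))

arguments = {
    "query": "employment offer letter",   # example from the schema
    "category": "employment",             # documented category value
    "max_results": clamp_max_results(100),
}
```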
Behavior: 4/5

Annotations declare readOnlyHint=true. Description adds valuable behavioral context: BM25 ranking algorithm and specific fields searched across. Does not mention return structure or rate limits, but safety profile is covered by annotations.

Conciseness: 5/5

Three sentences, zero waste. Front-loaded with core purpose, followed by mechanism (BM25), search scope, and sibling differentiation. Every clause earns its place.

Completeness: 4/5

Given 100% schema coverage, readOnly annotations, and the tool's straightforward search purpose, the description is functionally complete. Minor gap: no hint at return value structure, though this is partially mitigated by the tool name and sibling get_template.

Parameters: 3/5

Schema coverage is 100% with detailed descriptions and examples for all 4 parameters. Description mentions 'by keyword' (aligning with query param) and implies filtering via the search scope, but does not add semantic detail beyond what the comprehensive schema already provides. Baseline 3 appropriate for high schema coverage.

Purpose: 5/5

States specific verb (search), resource (legal agreement templates), ranking method (BM25), and exact search scope (names, descriptions, categories, sources, field definitions). Clearly distinguishes from list_templates sibling.

Usage Guidelines: 5/5

Explicitly states when to use this tool vs the sibling alternative: 'Use this instead of list_templates when you know what kind of agreement you need.' Provides clear selection criteria.

send_for_signature

Upload a DOCX file and create a draft signing envelope via DocuSign. Returns a review URL — the user must review and send from DocuSign. Never auto-sends. Authentication is handled automatically via OAuth — no API key needed.

Parameters (JSON Schema)
signers (required): Signers and CC recipients. First signer maps to party_1, second to party_2.
download_url (optional): URL to download the DOCX file. Use the download_url from fill_template.
document_name (optional): Filename for the document (e.g. 'Bonterms Mutual NDA.docx'). Auto-detected from download URL if not provided.
email_subject (optional): Subject line for the signing invitation email.
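A sketch of arguments for this tool. The schema shown here does not spell out the signer object shape, so the name/email keys below are assumptions, and the download URL is a placeholder standing in for the one fill_template returns:

```python
# Hypothetical send_for_signature arguments. Signer ordering matters:
# per the schema, the first signer maps to party_1, the second to party_2.
arguments = {
    "download_url": "https://example.com/filled-nda.docx",  # from fill_template
    "signers": [
        {"name": "Alice Example", "email": "alice@example.com"},  # party_1
        {"name": "Bob Example", "email": "bob@example.com"},      # party_2
    ],
    "email_subject": "Mutual NDA for signature",
}

party_1 = arguments["signers"][0]  # first signer maps to party_1
```

Since the tool only creates a draft envelope and never auto-sends, the user still has to review and dispatch the envelope from DocuSign afterward.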
Behavior: 5/5

Beyond the annotations (readOnlyHint: false, destructiveHint: false), the description adds critical behavioral context: it clarifies the tool creates a 'draft' envelope rather than finalizing it, discloses that it returns a 'review URL,' and explicitly states the manual review requirement. This provides essential safety and workflow information not captured in the boolean annotations.

Conciseness: 5/5

The description consists of three high-value sentences with zero waste: the first establishes purpose, the second explains the return value and workflow, and the third states a critical constraint. Information is front-loaded appropriately with the primary action stated immediately.

Completeness: 5/5

Given the absence of an output schema, the description compensates effectively by specifying the return value ('Returns a review URL'). It explains the complete lifecycle of the operation (draft creation → manual review → no auto-send) and clarifies the external dependency on the signing provider UI, providing sufficient context for a 5-parameter tool with complex external interactions.

Parameters: 3/5

With 100% schema description coverage, the baseline score is 3. The description mentions 'DOCX file' which aligns with the file_path and download_url parameters, but this merely echoes the schema descriptions ('Local path to the DOCX file,' 'URL to download the DOCX'). It adds no additional semantic detail about the signers array structure or the mutually exclusive nature of the file source parameters.

Purpose: 5/5

The description opens with a specific action ('Upload a DOCX file and create a draft signing envelope') that clearly identifies the resource and operation. It distinguishes itself from the sibling tool check_signature_status by emphasizing the creation of a draft envelope rather than querying status.

Usage Guidelines: 4/5

The description provides clear workflow guidance stating the user must review and send from the signing provider UI, and explicitly warns 'Never auto-sends.' However, it does not explicitly reference sibling tools like fill_template (which generates the DOCX input) or check_signature_status (for later monitoring) to guide selection between alternatives.
