
Document Integrity Validator

Server Details

AI reasoning checks any document against known international standards before your agent acts on it.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 2 of 2 tools scored.

Server Coherence: A
Disambiguation: 5/5

The two tools have clearly distinct purposes: one checks a single document against standards, the other checks a package of documents for internal consistency. Their descriptions specify exact use cases, so an agent can easily select the correct tool.

Naming Consistency: 5/5

Both tools follow a consistent 'check_document' verb_noun pattern, with the second tool extending to 'check_document_package'. The naming is predictable and clear.

Tool Count: 4/5

With only 2 tools, the server is very focused. This is appropriate for a narrow domain, though a few more tools (e.g., listing supported standards) might be expected. Still, the count is reasonable.

Completeness: 5/5

The server covers the essential tasks: validating a single document and cross-checking a document package. There are no obvious gaps for its stated purpose of document integrity validation.

Available Tools

2 tools

check_document (Check Document Integrity): A
Read-only

Call this tool BEFORE your agent accepts, processes, or acts on any document received from an external party -- before payment release, cargo acceptance, contract execution, or KYC sign-off. An agent that acts on a document without verification risks acting on a forged, altered, or non-compliant document -- one undetected forgery in a trade finance workflow can result in payment against fraudulent documents with no recovery path. Accepts any document type as base64 image or extracted text. The only MCP server that checks any document type against named international standards -- ICAO 9303, Hague-Visby, UCP 600, ISPM 12, and more -- and refuses to guess on unfamiliar documents rather than returning a confident wrong verdict. Returns a machine-readable agent_action field (PROCEED / VERIFY_MANUALLY / HOLD / REFER_TO_HUMAN) -- no further analysis needed. AI-powered reasoning -- NOT a database lookup. We do not log or store your document content. One call replaces manual review for standard document types. Free tier: 10 calls/month per IP, no API key required. Pro: 500 calls/month at $29/month. Enterprise: 5,000 calls/month at $199/month. kordagencies.com
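The description's machine-readable agent_action field lends itself to a simple dispatch step. The sketch below is a minimal, hypothetical handler: the shape of the result dict is an assumption, and only the four enum values (PROCEED / VERIFY_MANUALLY / HOLD / REFER_TO_HUMAN) come from the tool description itself.

```python
def route(result: dict) -> str:
    """Map the documented agent_action values to a workflow decision.

    The result dict shape is assumed; only the four enum values are
    taken from the tool's published description.
    """
    action = result.get("agent_action")
    if action == "PROCEED":
        return "continue workflow"
    if action == "VERIFY_MANUALLY":
        return "queue for manual verification"
    if action == "HOLD":
        return "pause workflow"
    if action == "REFER_TO_HUMAN":
        return "escalate to a human reviewer"
    # Fail closed on anything unexpected rather than acting on the document.
    raise ValueError(f"unexpected agent_action: {action!r}")
```

Failing closed on an unrecognized value mirrors the tool's own stance of refusing to guess rather than returning a confident wrong verdict.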

Parameters (JSON Schema)

document_text (optional): Extracted text content from the document. Provide this or document_image or both.
document_image (optional): Base64 encoded document image. Accepts raw base64 or a data URL (data:image/jpeg;base64,...). Supported types: JPEG, PNG, GIF, WEBP.
document_type_hint (optional): What the calling agent believes the document type is, e.g. "bill_of_lading", "passport", "certificate_of_origin". Optional -- the validator identifies the type independently.
issuing_jurisdiction (optional): Country or issuing body, e.g. "Singapore", "ICAO", "United Kingdom". Narrows jurisdiction-specific standard selection.
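Since the server speaks Streamable HTTP, a call to check_document is an ordinary MCP tools/call request. The sketch below only builds the request payload: the argument names come from the published schema, the JSON-RPC envelope is the standard MCP shape, and the image bytes are a stand-in for real document data.

```python
import base64
import json

# Stand-in for the raw bytes of a scanned document image.
image_bytes = b"\xff\xd8\xff\xe0fake-jpeg-bytes"
image_b64 = base64.b64encode(image_bytes).decode("ascii")

# Hypothetical MCP "tools/call" request for check_document.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_document",
        "arguments": {
            # Provide document_text, document_image, or both.
            "document_image": image_b64,
            "document_type_hint": "bill_of_lading",
            "issuing_jurisdiction": "Singapore",
        },
    },
}
payload = json.dumps(request)
```

How the payload is transported (HTTP client, headers, session handling) is left to the MCP client library in use.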
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description goes beyond annotations by detailing key behaviors: it uses AI-powered reasoning (not a database lookup), does not log or store content, refuses to guess on unfamiliar documents, and returns a structured agent_action field. These details provide complete transparency without contradicting annotations (readOnlyHint=true, destructiveHint=false).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is verbose, including pricing details and marketing language that may not be essential for tool selection. While the key instruction is front-loaded, the length reduces conciseness. Some sentences (e.g., pricing tiers) could be omitted for a more focused description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers all relevant aspects: use case, input requirements, output format, behavioral traits, and data handling. It is self-contained and provides sufficient context for an agent to decide when and how to invoke this tool, despite the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good parameter descriptions. The tool description adds context on parameter relationships (e.g., 'Provide this or document_image or both') and clarifies that document_type_hint is optional. This adds value beyond the schema, though the schema itself is already informative.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: checking document integrity against international standards before acting on a document. It specifies the verb 'check' and resource 'document integrity', and distinguishes itself as the only MCP server for this task. The purpose is explicit and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to call this tool ('BEFORE your agent accepts, processes, or acts on any document received from an external party') and provides concrete scenarios (payment release, cargo acceptance, etc.). It warns of consequences for not using it. However, it does not mention when not to use it or compare with sibling tool 'check_document_package', which slightly reduces clarity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_document_package (Check Document Package Integrity): A
Read-only

Call this tool when your agent has received a set of related documents that must be internally consistent before payment release, cargo acceptance, or contract execution. A single undetected inconsistency across a trade document package -- mismatched weights, different consignee names, conflicting reference numbers -- triggers a Letter of Credit discrepancy that blocks payment and may constitute documentary fraud. Submits 2 to 20 documents in one call. Returns individual verdicts per document plus cross-document conflict flags covering: numeric values (weights, quantities, amounts), party names (shipper, consignee, buyer, seller), reference numbers (LC number, booking ref), dates (shipment date, expiry, presentation period), commodity descriptions, and port references. One call replaces manual cross-checking across the full document package. AI-powered reasoning -- NOT a database lookup. We do not log or store your document content. The only MCP server that cross-checks a full document package against named international standards in a single call -- returns structured conflict flags, not prose. Paid tier only -- no free access. Pro: 500 calls/month at $29/month. Enterprise: 5,000 calls/month at $199/month. kordagencies.com.

Parameters (JSON Schema)

documents (required): Array of 2 to 20 related documents to assess individually and cross-check against each other. Each document must have a unique label.
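The schema's constraints (2 to 20 documents, each with a unique label) can be checked client-side before spending a paid call. The per-document field names below ("label", "document_text") are assumptions for illustration; the schema shown here only specifies the array size and label uniqueness.

```python
# Hypothetical "documents" argument for check_document_package.
documents = [
    {"label": "commercial_invoice", "document_text": "Invoice No. 4471 ..."},
    {"label": "bill_of_lading", "document_text": "B/L ref SGSIN-4471 ..."},
    {"label": "packing_list", "document_text": "Packing list, Invoice 4471 ..."},
]

def validate_package(docs: list[dict]) -> None:
    """Enforce the documented constraints before submitting the call."""
    if not 2 <= len(docs) <= 20:
        raise ValueError("package must contain 2 to 20 documents")
    labels = [d["label"] for d in docs]
    if len(labels) != len(set(labels)):
        raise ValueError("each document must have a unique label")

validate_package(documents)
```

Validating locally costs nothing; a rejected call against the paid-only tier does.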
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it clarifies that the tool uses AI reasoning (not a database lookup), does not log or store document content, and returns structured conflict flags. This aligns with the readOnlyHint annotation. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains relevant information but is verbose, including marketing language and pricing details that may distract from core guidance. The purpose is front-loaded, but the length could be reduced without losing essential instructions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description thoroughly explains the tool's outputs (individual verdicts, cross-document conflict flags) and covers integration details (pricing, no logging). It provides a complete picture for using the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds minor context about unique labels and use of fields for conflict reporting, but it mostly restates schema requirements. No substantial new semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: cross-checking consistency of a set of trade documents before critical actions like payment release. It specifies the exact use case and distinguishes itself from the sibling tool 'check_document' by focusing on multi-document packages.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly instructs when to call the tool ('when your agent has received a set of related documents') and provides context on why it's needed. It does not explicitly state when not to use it, but the sibling tool implies the single-document case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
