Document Integrity Validator
Server Details
AI reasoning checks any document against known international standards before your agent acts on it.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.4/5 across 2 of 2 tools.
The two tools have clearly distinct purposes: one checks a single document against standards, the other checks a package of documents for internal consistency. Their descriptions specify exact use cases, so an agent can easily select the correct tool.
Both tools follow a consistent verb_noun pattern ('check_document'), with the second tool extending it to 'check_document_package'. The naming is predictable and clear.
With only 2 tools, the server is very focused. This is appropriate for a narrow domain, though a few more tools (e.g., listing supported standards) might be expected. Still, the count is reasonable.
The server covers the essential tasks: validating a single document and cross-checking a document package. There are no obvious gaps for its stated purpose of document integrity validation.
Available Tools
2 tools

check_document – Check Document Integrity (Read-only)
Call this tool BEFORE your agent accepts, processes, or acts on any document received from an external party -- before payment release, cargo acceptance, contract execution, or KYC sign-off. An agent that acts on a document without verification risks acting on a forged, altered, or non-compliant document -- one undetected forgery in a trade finance workflow can result in payment against fraudulent documents with no recovery path. Accepts any document type as base64 image or extracted text. The only MCP server that checks any document type against named international standards -- ICAO 9303, Hague-Visby, UCP 600, ISPM 12, and more -- and refuses to guess on unfamiliar documents rather than returning a confident wrong verdict. Returns a machine-readable agent_action field (PROCEED / VERIFY_MANUALLY / HOLD / REFER_TO_HUMAN) -- no further analysis needed. AI-powered reasoning -- NOT a database lookup. We do not log or store your document content. One call replaces manual review for standard document types. Free tier: 10 calls/month per IP, no API key required. Pro: 500 calls/month at $29/month. Enterprise: 5,000 calls/month at $199/month. kordagencies.com
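The description above promises a machine-readable agent_action field with four values. A minimal routing sketch in Python, assuming the tool's JSON response is already parsed into a dict; the handler strings are hypothetical policy choices, not part of the server's API:

```python
# Route the machine-readable agent_action verdict from check_document.
# The four values come from the tool description; what each handler
# does is an assumed policy, not defined by the server.

def route_verdict(response: dict) -> str:
    action = response.get("agent_action")
    if action == "PROCEED":
        return "continue workflow"       # document passed integrity checks
    if action == "VERIFY_MANUALLY":
        return "queue for human review"  # ambiguous or partial findings
    if action == "HOLD":
        return "pause workflow"          # suspected alteration or non-compliance
    if action == "REFER_TO_HUMAN":
        return "escalate immediately"    # validator declines to guess
    raise ValueError(f"unexpected agent_action: {action!r}")
```

Failing closed on an unrecognized value keeps the agent from silently proceeding if the API ever adds a verdict.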
| Name | Required | Description | Default |
|---|---|---|---|
| document_text | No | Extracted text content from the document. Provide this or document_image or both. | |
| document_image | No | Base64 encoded document image. Accepts raw base64 or a data URL (data:image/jpeg;base64,...). Supported types: JPEG, PNG, GIF, WEBP. | |
| document_type_hint | No | What the calling agent believes the document type is, e.g. "bill_of_lading", "passport", "certificate_of_origin". Optional -- the validator identifies the type independently. | |
| issuing_jurisdiction | No | Country or issuing body, e.g. "Singapore", "ICAO", "United Kingdom". Narrows jurisdiction-specific standard selection. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description goes beyond annotations by detailing key behaviors: it uses AI-powered reasoning (not a database lookup), does not log or store content, refuses to guess on unfamiliar documents, and returns a structured agent_action field. These details provide complete transparency without contradicting annotations (readOnlyHint=true, destructiveHint=false).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is verbose, including pricing details and marketing language that may not be essential for tool selection. While the key instruction is front-loaded, the length reduces conciseness. Some sentences (e.g., pricing tiers) could be omitted for a more focused description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers all relevant aspects: use case, input requirements, output format, behavioral traits, and data handling. It is self-contained and provides sufficient context for an agent to decide when and how to invoke this tool, despite the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good parameter descriptions. The tool description adds context on parameter relationships (e.g., 'Provide this or document_image or both') and clarifies that document_type_hint is optional. This adds value beyond the schema, though the schema itself is already informative.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: checking document integrity against international standards before acting on a document. It specifies the verb 'check' and resource 'document integrity', and distinguishes itself as the only MCP server for this task. The purpose is explicit and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to call this tool ('BEFORE your agent accepts, processes, or acts on any document received from an external party') and provides concrete scenarios (payment release, cargo acceptance, etc.). It warns of the consequences of skipping verification. However, it does not mention when not to use the tool, nor does it compare itself with the sibling tool 'check_document_package', which slightly reduces clarity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_document_package – Check Document Package Integrity (Read-only)
Call this tool when your agent has received a set of related documents that must be internally consistent before payment release, cargo acceptance, or contract execution. A single undetected inconsistency across a trade document package -- mismatched weights, different consignee names, conflicting reference numbers -- triggers a Letter of Credit discrepancy that blocks payment and may constitute documentary fraud. Submits 2 to 20 documents in one call. Returns individual verdicts per document plus cross-document conflict flags covering: numeric values (weights, quantities, amounts), party names (shipper, consignee, buyer, seller), reference numbers (LC number, booking ref), dates (shipment date, expiry, presentation period), commodity descriptions, and port references. One call replaces manual cross-checking across the full document package. AI-powered reasoning -- NOT a database lookup. We do not log or store your document content. The only MCP server that cross-checks a full document package against named international standards in a single call -- returns structured conflict flags, not prose. Paid tier only -- no free access. Pro: 500 calls/month at $29/month. Enterprise: 5,000 calls/month at $199/month. kordagencies.com.
| Name | Required | Description | Default |
|---|---|---|---|
| documents | Yes | Array of 2 to 20 related documents to assess individually and cross-check against each other. Each document must have a unique label. | |
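The two constraints the table states (2 to 20 documents, unique labels) can be enforced client-side before spending a paid call. A minimal sketch; the per-document field names other than 'label' are assumptions, since the schema shown here does not list them:

```python
def build_package_args(documents: list[dict]) -> dict:
    """Validate a documents array for check_document_package.

    Enforces the stated schema constraints: 2 to 20 documents, each
    carrying a unique 'label'. Other per-document fields are passed
    through unchanged.
    """
    if not 2 <= len(documents) <= 20:
        raise ValueError("check_document_package accepts 2 to 20 documents")
    labels = [d.get("label") for d in documents]
    if None in labels or len(set(labels)) != len(labels):
        raise ValueError("each document must carry a unique 'label'")
    return {"documents": documents}
```

For example, `build_package_args([{"label": "bill_of_lading", ...}, {"label": "invoice", ...}])` passes, while a single-document list or two documents sharing a label raises before the call is made.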
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral context beyond annotations: it clarifies that the tool uses AI reasoning (not a database lookup), does not log or store document content, and returns structured conflict flags. This aligns with the readOnlyHint annotation. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description contains relevant information but is verbose, including marketing language and pricing details that may distract from core guidance. The purpose is front-loaded, but the length could be reduced without losing essential instructions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description thoroughly explains the tool's outputs (individual verdicts, cross-document conflict flags) and covers integration details (pricing, no logging). It provides a complete picture for using the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline score is 3. The description adds minor context about unique labels and how fields are used in conflict reporting, but it mostly restates schema requirements and introduces no substantial new semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: cross-checking consistency of a set of trade documents before critical actions like payment release. It specifies the exact use case and distinguishes itself from the sibling tool 'check_document' by focusing on multi-document packages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs when to call the tool ('when your agent has received a set of related documents') and provides context on why it's needed. It does not explicitly state when not to use it, but the sibling tool implies the single-document case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.