Kamy
Server Details
Other PDF MCP servers ask you to install Chrome, manage a local renderer, and ship templates yourself. Kamy MCP is a hosted endpoint: point any MCP client at https://mcp.kamy.dev/mcp, paste an API key, and start asking your AI to "generate an invoice for Acme Corp". Eight production-ready templates ship with the service (invoice, quote, receipt, contract, agreement, certificate, report, shipping-label).
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL: https://mcp.kamy.dev/mcp
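Because the server speaks MCP over Streamable HTTP, any HTTP client that can POST JSON-RPC 2.0 can talk to it. A minimal sketch of the request bodies an MCP client sends to the endpoint (how the API key is passed, e.g. an Authorization header, depends on your client and is not specified here; the protocol revision shown is the one that introduced Streamable HTTP):

```python
import json

ENDPOINT = "https://mcp.kamy.dev/mcp"  # from the listing above

def jsonrpc(method: str, params: dict, req_id: int) -> dict:
    """Build a JSON-RPC 2.0 request body as used by MCP."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# 1. Handshake: every MCP session starts with an initialize request.
init_req = jsonrpc("initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
}, req_id=1)

# 2. Discover the tool surface before calling anything.
list_req = jsonrpc("tools/list", {}, req_id=2)

print(json.dumps(init_req, indent=2))
```

In practice an MCP-aware client (Claude Desktop, Cursor, etc.) does this handshake for you; the sketch only shows what travels over the wire.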
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 6 of 6 tools scored.
Each tool has a clear and distinct purpose: ask_kamy for Q&A about Kamy, generate_integration_code for code snippets, get_api_key_instructions for API key setup, install_sdk for SDK install commands, list_templates for template listing, and render_pdf for PDF rendering. The description of ask_kamy explicitly redirects PDF-related queries to render_pdf, eliminating ambiguity.
All tool names follow a consistent verb_noun pattern in snake_case: ask_kamy, generate_integration_code, get_api_key_instructions, install_sdk, list_templates, render_pdf. No mixing of conventions.
With 6 tools, the set is well-scoped for the apparent domain of PDF generation and SDK integration. Each tool covers a core function without being excessive or insufficient.
The tool surface covers the main workflows: learning about Kamy, listing templates, rendering PDFs, generating integration code, installing SDK, and API key management. However, missing template management (create, update, delete) is a minor gap that could limit some workflows.
Available Tools
10 tools

ask_kamy (Grade: A)
Ask Kamy Brain a question about Kamy itself — how to use it, why something failed, which plan to pick, what fields a template needs. Returns a paragraph answer drawing on the full Kamy documentation. For rendering a PDF, use render_pdf instead.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | The question to ask Kamy about — how to render a template, why a render failed, what plan to pick, etc. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It states that the tool returns a paragraph answer drawing on the full documentation, which is clear. It lacks potential caveats (e.g., that it only answers from the docs), but is still good.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no fluff. First sentence defines purpose with examples, second gives alternative. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple tool with one parameter and no output schema, description fully covers what agent needs: what it does, what it returns, and when to use alternative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (parameter already described). Tool description rephrases examples but doesn't add new semantic value beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers questions about Kamy itself, with concrete examples like 'how to use it, why something failed, which plan to pick, what fields a template needs'. It distinguishes from sibling render_pdf.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use for asking about Kamy and provides a direct alternative ('For rendering a PDF, use render_pdf instead'). No ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_signature_request (Grade: A)
Send a previously rendered PDF to a signer for e-signature. Requires the render_id returned by render_pdf. Returns sign_url + sign_token; the signer is also emailed the link if RESEND is configured server-side.
| Name | Required | Description | Default |
|---|---|---|---|
| message | No | Optional message rendered in the email invitation body. | |
| position | No | Optional stamp position in PDF points (72 dpi, origin bottom-left). Defaults to bottom-right of last page sized 220×64 pt. | |
| renderId | Yes | Render UUID returned by render_pdf or any /v1/render call. The render's PDF is the document the signer will receive. | |
| signerName | Yes | Recipient full name. Must be typed verbatim by the signer to confirm intent. | |
| signerEmail | Yes | Recipient email address. | |
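The position parameter works in PDF user space: 1 pt = 1/72 inch, origin at the bottom-left corner, x growing rightward and y growing upward. As a worked illustration of the documented default (a 220×64 pt stamp at the bottom-right of the page), here is how the box placement works out on A4; the 18 pt margin is an assumption, not from the docs:

```python
# PDF user space: 1 pt = 1/72 inch, origin at the bottom-left of the page.
A4_WIDTH_PT, A4_HEIGHT_PT = 595.28, 841.89   # 210 mm x 297 mm in points
STAMP_W, STAMP_H = 220, 64                    # documented default stamp size
MARGIN = 18                                   # hypothetical margin, not documented

def bottom_right_box(page_w: float, w: float, margin: float) -> tuple:
    """Lower-left (x, y) of a box tucked into the bottom-right corner."""
    return (page_w - w - margin, margin)

x, y = bottom_right_box(A4_WIDTH_PT, STAMP_W, MARGIN)
print(x, y)  # the stamp's lower-left anchor in points
```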
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses return values (sign_url, sign_token) and email behavior condition. No annotations are provided, so the description carries the full burden. It covers key behavioral aspects but could mention side effects like state changes or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. Purpose is front-loaded, followed by essential details. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters (including nested object), no output schema, and no annotations, the description covers purpose, prerequisite, return values, and a key parameter nuance. Lacks detail on return format but is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good descriptions. The description adds value beyond the schema, notably the requirement that signerName must be typed verbatim and the default position for the stamp. This nuance aids correct usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool sends a previously rendered PDF for e-signature, specifies the prerequisite (render_id from render_pdf), and distinguishes from siblings like render_pdf (rendering) and list_signature_requests (listing).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: after obtaining a render_id from render_pdf. Mentions email behavior depending on server configuration. Does not explicitly state when not to use or list direct alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_integration_code (Grade: A)
Generate copy-paste integration code for a specific template and framework (Next.js, Remix, Express, Fastify, Hono, FastAPI, Flask).
| Name | Required | Description | Default |
|---|---|---|---|
| template | Yes | Template slug (e.g., 'invoice') | |
| framework | Yes | Target framework | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only states the tool generates code. It fails to disclose whether the operation is read-only, destructive, requires authentication, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence, front-loaded with the core action, lists frameworks efficiently. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the output is copy-paste integration code. It could improve by mentioning the output format or that it's a code snippet, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already documents both parameters (template and framework) with descriptions and enum. The description adds only high-level context (copy-paste integration) without deeper semantic meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Generate copy-paste integration code' and lists specific frameworks, making the tool's purpose explicit. It is easily distinguished from siblings like 'list_templates' and 'render_pdf'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for generating integration code but provides no explicit guidance on when to use this tool versus alternatives, no exclusions, and no context for prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_key_instructions (Grade: A)
Returns step-by-step instructions for creating a Kamy API key in the dashboard. Does not open the browser.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explicitly states that the tool only returns instructions and does not perform any action like opening a browser. With no annotations, this fully discloses the non-destructive, read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences that front-load the primary purpose and add a clarifying note. No superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters, no output schema, and no annotations, the description is sufficient. It explains the return value (step-by-step instructions) and the behavior (does not open browser), making it complete for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the description cannot add meaning beyond the schema. The baseline for zero parameters is 4, and the description does not need to elaborate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: returning step-by-step instructions for creating a Kamy API key. It distinguishes from sibling tools (e.g., generate_integration_code, install_sdk) which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly suggests using this tool when instructions are needed, and clarifies it does not open the browser. However, it lacks explicit guidance on when to choose this over alternatives or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
install_sdk (Grade: A)
Get install commands and setup code for @kamydev/sdk in your framework.
| Name | Required | Description | Default |
|---|---|---|---|
| framework | Yes | The framework the user is working with | |
| packageManager | No | Package manager to use | npm |
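With npm as the default package manager, the common case is trivial. A sketch of what the returned install command plausibly looks like — only the npm form and the @kamydev/sdk package name are grounded in the listing; the other package-manager spellings are standard conventions assumed here, and the actual enum values may differ:

```python
PACKAGE = "@kamydev/sdk"  # package name from the tool description

# Conventional install spellings per package manager (assumed beyond npm).
INSTALL = {
    "npm": f"npm install {PACKAGE}",
    "pnpm": f"pnpm add {PACKAGE}",
    "yarn": f"yarn add {PACKAGE}",
    "bun": f"bun add {PACKAGE}",
}

def install_command(package_manager: str = "npm") -> str:
    """Return the install command for the chosen package manager."""
    return INSTALL[package_manager]

print(install_command())  # npm install @kamydev/sdk
```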
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full responsibility. It indicates a safe 'get' operation returning commands and code, but does not elaborate on potential side effects (e.g., executing commands) or permission requirements. The behavior is minimal but not misleading.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that conveys essential information without unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (retrieving commands), the description covers the core purpose. However, it could mention the output format or that commands are non-executable, but overall it is sufficiently complete for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters adequately described (framework and packageManager enums). The description adds no additional semantic value beyond the schema, achieving the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves install commands and setup code for a specific SDK ('@kamydev/sdk') tailored to a framework, which distinguishes it from siblings like 'generate_integration_code' or 'list_templates'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives (e.g., 'generate_integration_code'). No context on prerequisites or exclusion cases is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_signature_requests (Grade: A)
List signature requests created by this account, newest first. Returns status (pending/signed/voided/expired) and the signed PDF path once complete.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 50. | |
| offset | No | Default 0. | |
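With only limit and offset documented, a client walks the full result set the usual way. A sketch against a stand-in fetch function (fetch_page is hypothetical; in practice it would be a tools/call to list_signature_requests, and the short-page stopping rule is an assumption about the server's behavior):

```python
def paginate(fetch_page, limit: int = 50):
    """Yield every item by advancing offset in limit-sized steps."""
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        yield from page
        if len(page) < limit:  # short page => assume no more results
            break
        offset += limit

# Demo against an in-memory stand-in for the tool call.
data = [{"id": i, "status": "pending"} for i in range(120)]
fake_fetch = lambda limit, offset: data[offset:offset + limit]
print(len(list(paginate(fake_fetch, limit=50))))  # 120
```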
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses ordering and return fields but lacks details on side effects, rate limits, authentication requirements, or pagination behavior beyond parameters. Decent but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. First sentence states the action and ordering, second sentence details the return. Very concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with no output schema. Description covers action and returns. Could mention default limit (50) and note about pagination, but is largely complete for a list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and both parameters have descriptions. The description does not add extra meaning beyond the schema, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list signature requests), the scope (created by this account), ordering (newest first), and return values (status and signed PDF path). It distinguishes itself from siblings like 'create_signature_request' and 'list_templates'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description implies usage for listing and status checking, but does not state when not to use it or mention alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_templates (Grade: A)
List all available PDF templates (system + your custom templates). No auth required.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that no authentication is needed and lists both system and custom templates. This is sufficient for a read-only listing with no side effects, though it could mention the output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short sentences, no wasted words, and front-loads the key action. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description provides essential context: what it lists and that no auth is needed. It could optionally describe the output format, but overall it is sufficiently complete for a simple list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and the input schema is empty. Per guidelines, 0 parameters gives a baseline of 4. The description does not need to add parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists all PDF templates, both system and custom, using a specific verb 'List' and resource 'PDF templates'. It distinguishes from sibling tools like render_pdf and generate_integration_code, which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes 'No auth required', indicating it can be used freely. While it doesn't explicitly contrast with siblings, the tool is a straightforward listing operation, and sibling tools have distinct functionalities, so confusion is unlikely.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pki_sign_pdf (Grade: A)
Cryptographically sign an existing render with PAdES (X.509 certificate). Returns the signed PDF URL plus a verify_url recipients can paste into kamy.dev/verify. Signed under Kamy's in-house CA — verification is via the Kamy verify page.
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | Optional /Sig dictionary Reason — surfaced in Acrobat's signature panel. ASCII-coerced server-side. | |
| location | No | Optional /Sig dictionary Location. | |
| renderId | Yes | Render UUID returned by render_pdf or any /v1/render call. The PDF will be sealed with a Kamy-issued X.509 leaf certificate. | |
| signerName | No | Override the signer display name. Defaults to the account's full_name. | |
| signerEmail | No | Override the signer email. Defaults to the account's email. | |
| withTimestamp | No | When false, skip the RFC 3161 timestamp call (PAdES-B-B instead of B-T). Default: true. | |
| withRevocationInfo | No | When false, skip embedding the Kamy CA CRL into the PKCS#7 SignedData (PAdES-B-T instead of B-LT). Online verifiers can still fetch the CRL via the Distribution Point on the leaf cert. Default: true. | |
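The two boolean flags map directly onto PAdES baseline levels as the parameter docs describe: no RFC 3161 timestamp gives B-B, a timestamp without an embedded CRL gives B-T, and both together give B-LT. A small sketch of that mapping (the function name is ours; the level logic follows the descriptions above):

```python
def pades_level(with_timestamp: bool = True, with_revocation_info: bool = True) -> str:
    """Derive the PAdES baseline level implied by the two flags.

    Per the parameter docs: no RFC 3161 timestamp => B-B; timestamp but no
    embedded CRL => B-T; timestamp plus revocation info => B-LT.
    """
    if not with_timestamp:
        return "PAdES-B-B"
    return "PAdES-B-LT" if with_revocation_info else "PAdES-B-T"

print(pades_level())  # the defaults (both true) give the strongest level
```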
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return values, trust level, and optional timestamp behavior. Lacks details on side effects (e.g., if original render is modified) but is sufficiently transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with front-loaded purpose, clear return values, and trust context. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description fully explains return values (URLs). Context of siblings and clear purpose make it complete for a signing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter described. Description adds no extra parameter semantics beyond what the schema already provides, adhering to the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool cryptographically signs an existing render using PAdES, specifies output (signed PDF URL + verify_url), and distinguishes from siblings like verify_pdf_signature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage after rendering, but lacks explicit when-to-use guidance versus alternatives like create_signature_request or verify_pdf_signature. Provides context on trust level (Kamy's in-house CA rather than a public trust list).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
render_pdf (Grade: A)
Render a PDF from a template and data. Returns a signed URL. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Data to populate the template | |
| format | No | | a4 |
| template | Yes | Template slug (e.g., 'invoice') or template UUID | |
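Over MCP, a render request is an ordinary tools/call message. A sketch of the payload (the data fields below are invented example values; the fields a given template actually requires can be checked via ask_kamy or list_templates):

```python
import json

# JSON-RPC 2.0 tools/call envelope per the MCP specification.
call = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "render_pdf",
        "arguments": {
            "template": "invoice",   # slug or template UUID
            "format": "a4",          # optional; a4 is the default
            "data": {                # example fields, assumed, not documented
                "customer": "Acme Corp",
                "items": [{"description": "Consulting", "amount": 1200}],
            },
        },
    },
}

print(json.dumps(call, indent=2))
```

The response would carry the signed URL described above; its exact shape is not documented in this listing.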
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It mentions authentication and returning a signed URL, but lacks disclosure about side effects, rate limits, or resource consumption beyond the rendering action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with the core action and output. No extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers purpose, output type, and a key requirement. Lacks details on error handling, output quality, or template constraints, but sufficient for a straightforward rendering tool with three parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description adds no additional meaning beyond schema; schema descriptions cover 67% of parameters. For the 'data' parameter (object with additionalProperties), no further guidance on structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states action: render a PDF from template and data, plus output type (signed URL) and requirement (authentication). Distinguishable from siblings like list_templates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage (when you need a PDF from template/data) but no explicit when-not-to-use or alternatives. The authentication requirement is noted but no context on prerequisites like template existence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_pdf_signature (Grade: B)
Compute the kamy.dev/verify URL for a PDF without making an API call. Pass the PDF as base64; the SHA-256 is computed locally and never uploaded.
| Name | Required | Description | Default |
|---|---|---|---|
| pdfBase64 | Yes | Base64-encoded PDF bytes. The hash is computed locally — the file is never uploaded. | |
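The privacy property is easy to reproduce: only a digest ever needs to leave the machine. A sketch of the local half (the exact shape of the kamy.dev/verify URL is not documented here, so only the hashing step is shown):

```python
import base64
import hashlib

def local_pdf_digest(pdf_base64: str) -> str:
    """Decode the base64 PDF and hash it locally; the bytes never leave the process."""
    pdf_bytes = base64.b64decode(pdf_base64)
    return hashlib.sha256(pdf_bytes).hexdigest()

# Demo with fake PDF bytes; a real call would read the file from disk.
fake_pdf = base64.b64encode(b"%PDF-1.7 example bytes").decode()
digest = local_pdf_digest(fake_pdf)
print(len(digest))  # 64 hex characters for SHA-256
```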
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It discloses the key trait that the hash is computed locally and the PDF is never uploaded. However, it omits other traits like error handling, return format, or what the URL signifies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at two sentences, front-loaded with the action, and contains no extraneous words. It could be slightly improved with a clearer separation of action and behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the core action and a key behavioral aspect. However, it fails to specify the return value (the URL) or any output format, making it slightly incomplete for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The parameter 'pdfBase64' is fully described in the schema (100% coverage), including the local computation detail. The description adds almost no new parameter-specific meaning, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool computes a URL for PDF signature verification without an API call. The verb 'compute' and resource 'kamy.dev/verify URL' are specific. However, it does not explicitly confirm that the actual verification happens or what the output represents, leaving slight ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when offline URL computation is desired by stating 'without making an API call'. It does not explicitly compare to siblings like 'pki_sign_pdf' or 'render_pdf', nor provide clear when-to-use or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!