Kamy
Server Details
Document API for AI-native software: render PDFs, e-sign, PAdES-seal, and verify.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Kamy-Development/kamy-plugin
- GitHub Stars: 0
- Server Listing: Kamy-plugin
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 10 of 10 tools scored. Lowest: 3.3/5.
Each tool has a clear, distinct purpose: Q&A, e-signature, integration code, API instructions, SDK install, listing requests, listing templates, PKI signing, PDF rendering, and verification. No two tools overlap in functionality.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., create_signature_request, render_pdf). No mixing of styles or ambiguous verbs.
10 tools is well-scoped for a PDF-focused server, covering core operations (render, sign, verify) plus auxiliary integration and support tools. Each tool earns its place without being excessive.
The tool surface covers PDF rendering, two signing methods (e-signature and PAdES), verification, signature request management, template listing, and developer integration help. No obvious gaps for the intended domain.
Available Tools
10 tools
ask_kamy: Ask Kamy (Grade A, read-only)
Ask Kamy Brain a question about Kamy usage, templates, plans, or errors. Sends the question to Kamy's public assistant endpoint and returns a paragraph answer.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | The question to ask Kamy about — how to render a template, why a render failed, what plan to pick, etc. | |
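For concreteness, here is a minimal sketch of invoking this tool from an agent, assuming the official MCP TypeScript SDK and a placeholder server URL; the question text is illustrative.

```typescript
// Minimal sketch: calling ask_kamy through the MCP TypeScript SDK.
// The endpoint URL is a placeholder; substitute the connector's actual URL.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")) // placeholder endpoint
);

const result = await client.callTool({
  name: "ask_kamy",
  arguments: { question: "Why did my invoice render fail?" },
});
console.log(result.content); // paragraph answer from Kamy's assistant endpoint
```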
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context about the endpoint type (public assistant) and response format (paragraph), which complements the readOnlyHint annotation. No contradictions; however, it could mention rate limits or authentication needs if applicable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loaded with the core action, and contains no unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter, no output schema, and read-only annotations, the description sufficiently covers the purpose, parameters, and behavior. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'question' already has a detailed description in the schema (100% coverage), so the tool description adds minimal new meaning beyond giving examples. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool asks questions about Kamy usage, templates, plans, or errors via a public assistant endpoint, distinguishing it from sibling tools focused on PDF operations, signatures, and API keys.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies when to use the tool (for Kamy-related questions) but does not explicitly state when not to use it or mention alternatives. Given sibling tools are all different, the guidance is clear but not exhaustive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_signature_request: Send for e-signature (Grade A)
Send a previously rendered PDF to a signer for e-signature. Creates a signature request and may email the signer. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| message | No | Optional message rendered in the email invitation body. | |
| position | No | Optional stamp position in PDF points (72 dpi, origin bottom-left). Defaults to bottom-right of last page sized 220×64 pt. | |
| renderId | Yes | Render UUID returned by render_pdf or any /v1/render call. The render's PDF is the document the signer will receive. | |
| signerName | Yes | Recipient full name. Must be typed verbatim by the signer to confirm intent. | |
| signerEmail | Yes | Recipient email address. | |
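The argument shape implied by the table might look like the following sketch; the renderId and signer details are placeholders, not real values.

```typescript
// Illustrative arguments for create_signature_request, following the parameter table above.
const createSignatureRequestArgs = {
  renderId: "00000000-0000-0000-0000-000000000000", // UUID from a prior render_pdf call (placeholder)
  signerName: "Jane Doe",                            // must be typed verbatim by the signer
  signerEmail: "jane.doe@example.com",
  message: "Please review and sign the attached invoice.", // optional email body text
  // position is optional; omitted here, so the stamp defaults to the
  // bottom-right of the last page at 220×64 pt.
};
```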
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=false, destructiveHint=false, openWorldHint=true) already indicate mutation and potential side effects. The description adds that the tool may email the signer, which is useful but expected. No additional behavioral details like idempotency or rate limits are given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that front-load the core purpose. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters, a nested object, and no output schema, the description is thin. It lacks information about return values and usage scenarios, relying heavily on the schema for parameter details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents parameters well. The description does not add any extra meaning about parameters beyond what is in the input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool sends a previously rendered PDF to a signer for e-signature, using specific verbs and resources. It effectively distinguishes itself from sibling tools like render_pdf (which creates the PDF) and pki_sign_pdf (digital signature).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool requires a renderId from a previous render and authentication, but it does not explicitly specify when to use this tool over alternatives like pki_sign_pdf or list_signature_requests. No exclusions or context for switching are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_integration_code: Generate integration code (Grade A, read-only)
Generate copy-paste integration code for a specific Kamy template and framework.
| Name | Required | Description | Default |
|---|---|---|---|
| template | Yes | Template slug (e.g., 'invoice') | |
| framework | Yes | Target framework | |
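A hypothetical call might pass arguments like this; the framework value is an assumption, since the accepted enum options are not shown in the listing.

```typescript
// Illustrative arguments for generate_integration_code.
const generateIntegrationCodeArgs = {
  template: "invoice",  // template slug
  framework: "nextjs",  // hypothetical value for the target framework enum
};
```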
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. Description adds that output is 'copy-paste integration code', which aligns. No additional behavioral traits disclosed beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no filler, front-loaded with purpose. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple parameters and annotations, description is adequate. Could hint at output format, but 'copy-paste integration code' implies code snippet. No output schema, so minimal burden.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are fully documented. Description does not add new meaning beyond echoing the schema's template and framework concepts. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'generate' and resource 'integration code', specific context 'Kamy template and framework'. Distinguishes from siblings like list_templates and install_sdk by focusing on code generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage: when integration code is needed for a template and framework. No explicit when-not or alternatives given, but context signals (sibling tools) allow inference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_key_instructions: Get API key instructions (Grade A, read-only)
Return step-by-step instructions for creating a Kamy API key in the dashboard. Does not open the browser.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds one behavioral trait ('Does not open the browser'), but no further details about side effects or limitations. With annotations covering safety, the description provides moderate additional transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences, no filler, front-loaded with purpose, and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and safety annotations, the description covers the essential purpose and a key behavioral trait. It could optionally mention output format, but is still adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, so schema coverage is 100%. The description does not need to add parameter meaning, and it does not. Baseline for 0 parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return step-by-step instructions for creating a Kamy API key in the dashboard.' It uses a specific verb and resource, and distinguishes from siblings with the negative statement 'Does not open the browser.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining API key creation instructions, but does not explicitly mention when to avoid using it or compare to sibling tools like 'ask_kamy' which may also provide instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
install_sdk: Install Kamy SDK (Grade B, read-only)
Get install commands and setup code for @kamydev/sdk in your framework.
| Name | Required | Description | Default |
|---|---|---|---|
| framework | Yes | The framework the user is working with | |
| packageManager | No | Package manager to use | npm |
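An illustrative argument object, assuming 'react' and 'pnpm' are among the accepted enum values (not confirmed by the listing):

```typescript
// Illustrative arguments for install_sdk.
const installSdkArgs = {
  framework: "react",      // the framework the user is working with (assumed enum value)
  packageManager: "pnpm",  // optional; falls back to npm when omitted
};
```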
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, and the description ('Get install commands') aligns with a read-only operation. The description does not add behavioral details beyond what annotations provide, but it does not contradict them either. No annotation contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of 13 words, extremely concise with no wasted words. It front-loads the action and resource immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple informational tool with 2 parameters and no output schema, the description is minimally adequate. It does not explain what 'setup code' includes or how results are returned, but given the tool's simplicity, it covers the basics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds minimal extra meaning ('for @kamydev/sdk in your framework') which is redundant. Baseline 3 is appropriate as the schema already documents the parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides install commands and setup code for a specific SDK, which indicates the tool's purpose. It distinguishes from sibling tools like 'get_api_key_instructions', which serve different needs. However, the verb 'Get' is vague; action-oriented phrasing like 'Provide install commands' would be clearer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives no guidance on when to use this tool versus alternatives such as 'generate_integration_code' or 'ask_kamy'. It lacks explicit when-to-use or when-not-to-use context, leaving the agent to infer based on name and purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_signature_requests: List signature requests (Grade A, read-only)
List signature requests created by this Kamy account, newest first. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (up to 100). | 50 |
| offset | No | Number of results to skip (minimum 0). | 0 |
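A sketch of paginating through results with these parameters; the values simply request the second page of 50.

```typescript
// Illustrative pagination arguments for list_signature_requests (both optional).
const listSignatureRequestsArgs = {
  limit: 50,   // up to 100 per call; defaults to 50
  offset: 50,  // skip the first 50 requests; defaults to 0
};
```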
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds context beyond annotations: 'newest first' ordering and 'created by this Kamy account' scope. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise, informative, and front-loaded. Every word adds value, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with pagination parameters and no output schema, the description covers the essential behavioral aspects: ordering, scope, and authentication. Combined with annotations, the agent has sufficient context to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with clear descriptions for both parameters (limit: max 100, default 50; offset: min 0, default 0). The description adds no additional parameter details, which is acceptable when schema is sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list), the resource (signature requests), and specifies ordering (newest first) and scope (created by this account). It distinguishes from sibling tools like create_signature_request or verify_pdf_signature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions authentication requirement and implies listing all requests. It does not explicitly state when not to use this tool, but the purpose is clear and no alternative listing tool exists among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_templates: List templates (Grade A, read-only)
List Kamy's public system PDF templates. No authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and destructiveHint, so the description's addition of 'No authentication required' provides some context beyond the schema. However, it does not disclose other behavioral aspects like pagination or response format. With good annotation coverage, this is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, with the main action first and additional information second. Every word earns its place, and there is no superfluous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters, no output schema, and informative annotations, the description provides the essential purpose and authentication context. It is mostly complete, though it could optionally hint at the response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the description has no need to add parameter semantics. Per guidelines, baseline score for 0 parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list) and the specific resource (Kamy's public system PDF templates), distinguishing it from sibling tools like list_signature_requests. The note about no authentication adds clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context (listing templates without authentication) but does not explicitly state when to use this tool versus alternatives like list_signature_requests. The usage is implied but not guided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pki_sign_pdf: PKI-sign PDF (Grade A)
Cryptographically sign an existing render with PAdES and return the signed PDF URL plus verify URL. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | Optional /Sig dictionary Reason — surfaced in Acrobat's signature panel. ASCII-coerced server-side. | |
| location | No | Optional /Sig dictionary Location. | |
| renderId | Yes | Render UUID returned by render_pdf or any /v1/render call. The PDF will be sealed with a Kamy-issued X.509 leaf certificate. | |
| signerName | No | Override the signer display name. Defaults to the account's full_name. | |
| signerEmail | No | Override the signer email. Defaults to the account's email. | |
| withTimestamp | No | When false, skip the RFC 3161 timestamp call (PAdES-B-B instead of B-T). | true |
| withRevocationInfo | No | When false, skip embedding the Kamy CA CRL into the PKCS#7 SignedData (PAdES-B-T instead of B-LT). Online verifiers can still fetch the CRL via the Distribution Point on the leaf cert. | true |
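Putting the optional fields together, a hypothetical call could look like this; renderId is a placeholder and the reason and location strings are invented.

```typescript
// Illustrative arguments for pki_sign_pdf. Only renderId is required; the other
// fields show how the PAdES level can be tuned per the table above.
const pkiSignPdfArgs = {
  renderId: "00000000-0000-0000-0000-000000000000", // UUID from render_pdf (placeholder)
  reason: "Approved by finance",   // shown in Acrobat's signature panel
  location: "Berlin, DE",
  withTimestamp: true,             // keep the RFC 3161 timestamp (B-T rather than B-B)
  withRevocationInfo: false,       // skip embedding the CA CRL, so the result stays at B-T instead of B-LT
};
```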
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-read-only and non-destructive behavior. The description adds value by specifying the signing standard (PAdES), the return URLs, and authentication requirement. It does not contradict annotations and provides behavioral context beyond them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence conveying the core action, output, and prerequisite (authentication). No filler or irrelevant details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, and the description only mentions the return of two URLs. Missing details include URL lifetime, storage behavior, error cases, and process duration. For a cryptographically signing tool, more contextual completeness is expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. The description adds minimal parameter-specific context (it only implies that renderId is required and names the certificate issuer). The primary value comes from the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool cryptographically signs an existing render with PAdES and returns two URLs (signed PDF and verify URL). It distinguishes itself from siblings like verify_pdf_signature by specifying the action as 'sign' and technology as 'PAdES'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a renderId from render_pdf is available and authentication is required, but it does not explicitly state when to use this tool versus alternatives like create_signature_request, nor does it provide exclusions or prerequisites beyond renderId.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
render_pdf: Render PDF (Grade A)
Render a PDF from a Kamy template and data. Creates a render in the user's Kamy account and returns a signed URL. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Data to populate the template | |
| format | No | | a4 |
| template | Yes | Template slug (e.g., 'invoice') or template UUID | |
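A sketch of plausible arguments; the data keys are invented, since the real shape depends entirely on the chosen template.

```typescript
// Illustrative arguments for render_pdf.
const renderPdfArgs = {
  template: "invoice",   // slug or template UUID
  data: {                // hypothetical fields; real keys depend on the template
    customer: "Acme Corp",
    items: [{ description: "Consulting", amount: 1200 }],
  },
  format: "a4",          // optional; a4 is the default
};
```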
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, and description confirms mutation by noting 'Creates a render'. Lacks detail on side effects, permissions, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, no fluff. Action and outcome front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Missing details like expiration of signed URL, no output schema, and no example of data structure. Adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (data and template are described; format has only an enum). The description does not add extra meaning or examples beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool renders a PDF from a Kamy template and data, and specifies the outcome (signed URL). This distinguishes it from other tools like list_templates or pki_sign_pdf.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Only mentions 'Requires authentication.' No guidance on when to use this tool versus alternatives like pki_sign_pdf or create_signature_request.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_pdf_signature: Verify PDF signature (Grade A, read-only)
Compute the kamy.dev/verify URL for a PDF without making a Kamy API call. Pass the PDF as base64; the MCP Worker hashes it in-memory and does not store or forward it.
| Name | Required | Description | Default |
|---|---|---|---|
| pdfBase64 | Yes | Base64-encoded PDF bytes. The MCP Worker hashes the file in-memory and does not store or forward it. | |
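A short Node.js sketch of preparing the single argument; the file path is a placeholder and the base64 encoding step is the only work the caller does locally.

```typescript
// Sketch: read a signed PDF from disk and base64-encode it for verify_pdf_signature.
import { readFile } from "node:fs/promises";

const pdfBytes = await readFile("./signed-invoice.pdf"); // placeholder path
const verifyPdfSignatureArgs = {
  pdfBase64: pdfBytes.toString("base64"), // hashed in-memory by the MCP Worker; never stored or forwarded
};
```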
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context beyond annotations by stating the worker hashes in-memory and does not store or forward the PDF, which is a useful behavioral detail. Annotations already indicate read-only and non-destructive, so the description enhances transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words. Every sentence adds value: first states what it does, second adds behavioral transparency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with low complexity (1 param, no output schema), the description is largely complete. It explains the core function and key behavior. However, it could mention the output format (the URL) explicitly, as there is no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for the single parameter, and the description repeats the same information. No additional meaning is added beyond what the schema provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (compute the kamy.dev/verify URL) and the resource (PDF), and distinguishes it from sibling tools like pki_sign_pdf or create_signature_request by specifying it does not make an API call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for lightweight signature verification without API calls, but does not explicitly state when to use vs. alternatives or provide exclusions. It lacks explicit guidance on when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.