Prince Cloud
Server Details
Convert Markdown, HTML, and web pages to high-quality PDF with Prince.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
3 tools
html_to_pdf (Grade: A, Read-only)
Convert HTML content to PDF using Prince.
Args:
content: HTML content to convert
style: Additional CSS to apply (inline stylesheet content)
page_size: Page size (e.g., A4, letter)
page_margin: Page margins (e.g., 20mm)
javascript: Enable JavaScript execution
pdf_profile: PDF profile (e.g., PDF/A-3b, PDF/UA-1)
output_filename: Output filename for the PDF (default: "output.pdf")

| Name | Required | Description | Default |
|---|---|---|---|
| style | No | | |
| content | Yes | | |
| page_size | No | | |
| javascript | No | | |
| page_margin | No | | |
| pdf_profile | No | | |
| output_filename | No | | |
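To make the parameter table concrete, here is a hedged sketch of what a `tools/call` request for html_to_pdf might look like. The JSON-RPC envelope follows the MCP specification's `tools/call` shape; the HTML, CSS, and filename values are invented examples, not output from this server.

```python
import json

# Hypothetical arguments for an html_to_pdf call. Only "content" is
# required; the remaining fields fall back to server-side defaults.
arguments = {
    "content": "<h1>Invoice</h1><p>Total: $100</p>",
    "style": "h1 { color: navy; }",
    "page_size": "A4",
    "page_margin": "20mm",
    "pdf_profile": "PDF/A-3b",
    "output_filename": "invoice.pdf",
}

# JSON-RPC tools/call request as defined by the MCP specification.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "html_to_pdf", "arguments": arguments},
}

print(json.dumps(request, indent=2))
```

Note that the response format (file path, binary stream, or base64 content) is not documented by the tool description, so clients should inspect the first result rather than assume a shape.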
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a non-destructive, read-only operation; the description adds that conversion uses the 'Prince' engine, which is valuable behavioral context. However, despite readOnlyHint=true, the description doesn't clarify whether output_filename writes to the server filesystem (a side effect) or returns a data stream, leaving ambiguity about actual behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with a clear purpose statement. The Args section is structured and scannable, efficiently documenting 7 parameters given the schema's complete lack of descriptions. Slightly verbose but necessary given that constraint; no redundant sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all input parameters well, but lacks an output schema, and the description doesn't specify the return format (file path, binary content, or base64 string) or error handling for invalid HTML/CSS. A tool of moderate complexity needs return-value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, but description fully compensates by documenting all 7 parameters in the Args section with types, purposes, and examples (e.g., 'e.g., A4, letter', 'e.g., 20mm', 'default: output.pdf'). Without this Args section, parameters would be opaque.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Convert') and resources ('HTML content to PDF'). Explicitly names the engine ('Prince'), which distinguishes implementation details. Differentiates from siblings markdown_to_pdf and url_to_pdf by specifying HTML content as the input source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through naming ('html_to_pdf') and parameter listing, but lacks explicit when/when-not guidance comparing it to sibling tools markdown_to_pdf and url_to_pdf. No mention of prerequisites like Prince installation availability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markdown_to_pdf (Grade: B, Read-only)
Convert Markdown content to PDF using Prince.
Args:
content: Markdown content to convert
style: Additional CSS to apply (inline stylesheet content)
page_size: Page size (e.g., A4, letter)
page_margin: Page margins (e.g., 20mm)
javascript: Enable JavaScript execution
pdf_profile: PDF profile (e.g., PDF/A-3b, PDF/UA-1)
output_filename: Output filename for the PDF (default: "output.pdf")

| Name | Required | Description | Default |
|---|---|---|---|
| style | No | | |
| content | Yes | | |
| page_size | No | | |
| javascript | No | | |
| page_margin | No | | |
| pdf_profile | No | | |
| output_filename | No | | |
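The required/optional split in the table above can be mirrored in a small client-side helper. This is a hypothetical convenience function, not part of the server's API: it enforces that `content` is present, rejects unknown parameter names, and applies the documented `output_filename` default of "output.pdf".

```python
# Hypothetical client-side helper mirroring the markdown_to_pdf
# parameter table: "content" is required, all others optional.
def build_markdown_to_pdf_args(content, **options):
    if not content:
        raise ValueError("content (Markdown string) is required")
    allowed = {"style", "page_size", "page_margin",
               "javascript", "pdf_profile", "output_filename"}
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    # Documented server default, applied here for visibility.
    args = {"content": content, "output_filename": "output.pdf"}
    args.update(options)
    return args

args = build_markdown_to_pdf_args("# Report", page_size="A4")
print(args["output_filename"])  # output.pdf unless overridden
```

Validating argument names before the call is cheap insurance when, as here, the server's schema carries no per-parameter descriptions to catch mistakes.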
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context by naming the external dependency (Prince) which aligns with openWorldHint annotation. However, it fails to disclose where the resulting PDF is stored (filesystem path vs returned content) or side effects despite the mutation implied by file creation, relying solely on annotations for safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the purpose statement front-loaded, followed by the Args documentation. Each parameter earns its place. The formatting is clear, though the Args block is slightly verbose compared to inline schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter tool with external dependencies, the description adequately covers inputs but lacks explanation of output behavior (return value vs file write location) and error handling. Without an output schema, the description should clarify what the tool returns or produces.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the Args block compensates effectively by documenting all 7 parameters with concise meanings and helpful examples (e.g., 'A4, letter', 'PDF/A-3b'). It provides sufficient semantic context missing from the bare schema titles, though could specify the content format requirements (e.g., 'valid Markdown string').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the conversion action (Markdown to PDF) and specifies the engine used (Prince), which distinguishes it from sibling tools html_to_pdf and url_to_pdf by input type. However, it could explicitly state when to prefer this over converting Markdown to HTML first, or direct comparisons to siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the siblings (html_to_pdf, url_to_pdf) or prerequisites like Prince installation. There are no explicit when-to-use or when-not-to-use conditions specified.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
url_to_pdf (Grade: A, Read-only)
Fetch a URL and convert it to PDF using Prince.
Args:
url: URL of a document to fetch and convert
style: Additional CSS to apply (inline stylesheet content)
page_size: Page size (e.g., A4, letter)
page_margin: Page margins (e.g., 20mm)
javascript: Enable JavaScript execution
pdf_profile: PDF profile (e.g., PDF/A-3b, PDF/UA-1)
output_filename: Output filename for the PDF (default: "output.pdf")

| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
| style | No | | |
| page_size | No | | |
| javascript | No | | |
| page_margin | No | | |
| pdf_profile | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable context by identifying Prince as the rendering engine and implying network activity (fetching). However, it omits critical behavioral details such as error handling for unreachable URLs, timeout behavior, where the output PDF is stored/returned, and whether Prince requires specific system resources or licenses.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action sentence, followed by a structured Args section. While the Args block adds length, it is necessary given the complete lack of schema descriptions. The organization is logical and every sentence earns its place by conveying essential parameter constraints and examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (network fetching, external Prince dependency, 7 parameters, PDF generation), the description adequately covers parameter semantics but lacks crucial output information. With no output schema present, the description should specify what the function returns (file path, buffer, or URL) and any side effects or persistence details of the generated PDF.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (properties only have titles like 'Url', 'Style'), the description fully compensates by documenting all 7 parameters in the Args section. It provides semantic meaning for each (e.g., 'Additional CSS to apply', 'Enable JavaScript execution') and includes concrete examples for page_size ('A4, letter'), page_margin ('20mm'), and pdf_profile ('PDF/A-3b, PDF/UA-1').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Fetch a URL and convert it to PDF), identifies the target resource (URL content), and distinguishes from siblings (html_to_pdf, markdown_to_pdf) by specifying URL fetching rather than direct content input. It also notes the specific technology (Prince), which helps identify capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Fetch a URL' implies the intended use case (when you have a URL rather than raw HTML/Markdown), providing implied differentiation from the html_to_pdf and markdown_to_pdf siblings. However, it lacks explicit guidance on when to choose this tool over alternatives or any prerequisites/limitations for URL fetching.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
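Before publishing, you can sanity-check the file locally. This is a sketch of a structural check (valid JSON, expected `$schema`, at least one maintainer with an email), using the placeholder values from the example above; it does not replicate Glama's actual verification logic.

```python
import json

# The example payload from above, with placeholder email.
payload = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""

doc = json.loads(payload)  # raises ValueError if the JSON is malformed
assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
maintainers = doc.get("maintainers", [])
assert maintainers, "at least one maintainer entry is required"
assert all("@" in m.get("email", "") for m in maintainers)
print("glama.json looks structurally valid")
```

Remember that the email must match your Glama account email, which a local check like this cannot confirm.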
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!