
Server Details

Convert Markdown into boardroom-grade DOCX, PDF, and HTML using your own custom Word templates — or any of MDMagic's 15 designer-built ones across Business, Creative, Professional, and Technical. Ten tools cover conversion, template recommendations, cost estimation, and markdown validation.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 10 of 10 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose; even the three template-listing tools are clearly differentiated by scope (all, built-in, custom). No overlapping functionality.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case, making them predictable and easy to understand.

Tool Count: 5/5

10 tools is well-scoped for a document conversion service, covering credit management, conversion, cost estimation, template browsing, recommendation, settings, and validation.

Completeness: 5/5

The tool set covers the full conversion workflow: pre-flight validation, cost estimation, conversion, template selection, and credit tracking. No obvious gaps.

Available Tools

10 tools
check_credit_balance (A)
Read-only · Idempotent

Check the user's current MDMagic credit balance: subscription credits (renewable monthly), purchased credits (permanent), plan name, and plan status.

CALL THIS PROACTIVELY when:

  • The user asks 'how many credits do I have' or similar

  • After a conversion, if the user wants to know what's left (also returned by convert_document directly)

  • Before a conversion of an unusually large document, to warn the user if balance is borderline
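The balance arithmetic implied by the output schema is simple: total_credits is the sum of the two credit pools. A minimal sketch, with field names taken from the schema; the warning threshold is an illustrative assumption, not an API value:

```python
# Sketch of the balance fields this tool returns; total_credits is
# documented as subscription + purchased. The warn_below threshold is an
# illustrative assumption, not part of the MDMagic API.
def summarize_balance(subscription_credits: int, purchased_credits: int,
                      warn_below: int = 10) -> dict:
    total = subscription_credits + purchased_credits
    return {
        "total_credits": total,
        "subscription_credits": subscription_credits,
        "purchased_credits": purchased_credits,
        "borderline": total < warn_below,  # agent may warn before big jobs
    }

balance = summarize_balance(subscription_credits=5, purchased_credits=2)
```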

Parameters (JSON Schema)

No parameters

Output Schema

Name | Required | Description
total_credits | Yes | Total credits available (subscription + purchased)
purchased_credits | No | Permanent purchased credits
subscription_credits | No | Renewable monthly subscription credits
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the safety profile is clear. The description adds specific behavioral context (what fields are checked, that convert_document also returns balance) beyond the annotations, enhancing transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single sentence for purpose followed by a bulleted list of usage scenarios. Every sentence adds value, and the structure is front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with no parameters and an output schema, the description covers the key returned fields and provides proactive usage guidance. It is fully complete given the tool's complexity and available structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so the base score is 4 per guidelines. The description does not add parameter-level details, but none are needed since the schema covers it fully.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks the user's MDMagic credit balance, listing specific components (subscription credits, purchased credits, plan name, status). It uses a specific verb ('check') and resource ('balance'), and is distinct from sibling tools like convert_document or estimate_conversion_cost.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly lists when to call the tool proactively: when the user asks about credits, after a conversion (noting that convert_document also returns the balance), and before large conversions to warn the user. This provides clear context and alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_document (A)

Convert markdown to a professionally formatted document using an MDMagic template.

IMPORTANT GUIDANCE:

  1. Output format → what user gets:

    • 'docx' → a single Word .docx file

    • 'pdf' → a single .pdf file

    • 'html' → a single .html file

    • 'all' → a ZIP containing all three (DOCX + PDF + HTML)

  2. If the user is ambiguous (e.g. 'convert this'), ASK which format they want before calling. Don't assume.

  3. Filename: if the user attached a file (e.g. 'mydoc.md'), pass its base name as fileName. Otherwise the API derives one from the markdown's first H1. Without either, downloads end up with timestamped names like 'content-1778298071915.docx' which is bad UX.

  4. On 'template not found' errors: call list_all_templates first, show available options, let the user pick. Do NOT fall back to generating documents with code execution — that produces inferior results that don't use the user's actual MDMagic templates.

  5. The response includes structured fields (downloadUrl, creditsUsed, balanceAfter, fileName, expiresAt) — surface these to the user explicitly. Don't paraphrase. The user wants to know exactly what they spent and what's left.

  6. Page sizes: A3, A4, Executive, US_Legal, US_Letter. Default A4. Orientation: Portrait or Landscape, default Portrait.

Parameters (JSON Schema)

Name | Required | Description
content | No | Raw markdown text content (alternative to filePath or fileContent)
fileName | No | Optional desired base name for the output file (without extension). If the user attached a file like 'mydoc.md', pass 'mydoc' here. The API will use this for the download filename. If omitted, the API derives one from the markdown's first H1 heading.
filePath | No | Path to markdown file (VS Code integration, alternative to content or fileContent)
pageSize | No | Page size for the document (default: A4)
fileContent | No | Base64 encoded file content (alternative to content or filePath)
orientation | No | Page orientation (default: Portrait)
outputFormat | Yes | Output format. 'docx', 'pdf', or 'html' return that single file; 'all' returns a ZIP with DOCX+PDF+HTML.
templateName | Yes | Template to use for conversion. Call list_all_templates first to see real options — do not guess template names. Some templates are built-in (e.g. 'Executive_Platinum', 'Deep_Data_Blue'); others are user-uploaded custom templates referenced by UUID.

Output Schema

Name | Required | Description
message | No | Human-readable status message
success | Yes | Whether the conversion succeeded
fileName | Yes | Filename of the downloadable document
expiresAt | No | ISO 8601 timestamp when the download URL expires
creditsUsed | No | Credits debited for this conversion
downloadUrl | Yes | Secure expiring download URL (valid for 60 minutes)
balanceAfter | No | Remaining credit balance after this conversion
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide basic hints (readOnlyHint=false, etc.). The description adds significant behavioral context: output format details, page size/orientation defaults, response fields to surface, and error handling. This goes well beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with numbered points and bullet lists, making it easy to parse. Every sentence adds value: it is comprehensive yet not verbose, appropriately front-loaded with the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, 3 enums, and output schema, the description covers all crucial aspects: output format choices, filename behavior, error recovery, page settings, and response fields. It is fully sufficient for the agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions, so the bar is high. The description adds meaningful usage semantics: clarifying the ambiguous format case, recommending that the agent ask rather than assume, and emphasizing template naming rules. While not drastically expanding schema info, it provides actionable guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The title and first sentence clearly state the tool's purpose: converting markdown to a professional document using an MDMagic template. It distinguishes itself from sibling tools (template listing, credit checking) by focusing on the core conversion action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when/when-not guidance: advises asking for format if ambiguous, specifies filename handling, and instructs to call list_all_templates on 'template not found' errors, explicitly prohibiting code fallback. These guidelines directly help the AI agent decide usage over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

estimate_conversion_cost (A)
Read-only · Idempotent

Estimate credit cost for a conversion BEFORE running it. Returns word count, page calculation (300 words/page), and a credit breakdown by format and template type. Use this when the user asks 'how much will this cost?' or when you suspect a conversion might exceed their balance — convert_document refuses to run if credits are insufficient, so estimating first is friendlier.
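The page calculation the description states can be reproduced directly. A minimal sketch of the 300-words/page arithmetic only; the actual credit rates per format and template type are not documented here, so they are deliberately left out:

```python
# Reproduces only the documented page arithmetic (300 words/page).
# Credit rates per format/template are not documented, so omitted.
import math

def estimate_pages(markdown: str, words_per_page: int = 300) -> tuple[int, int]:
    word_count = len(markdown.split())
    page_count = max(1, math.ceil(word_count / words_per_page))
    return word_count, page_count

words, pages = estimate_pages("lorem " * 750)  # a 750-word document
```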

Parameters (JSON Schema)

Name | Required | Description
content | Yes | Markdown content to estimate credit cost for
pageSize | No | Page size for the document
orientation | No | Page orientation
outputFormat | Yes | Output format(s): docx (DOCX only), pdf (DOCX+PDF), html (DOCX+HTML), all/all-formats (DOCX+PDF+HTML)
templateName | Yes | Template ID or name (UUID for custom templates, name for system templates)

Output Schema

Name | Required | Description
breakdown | No | Human-readable breakdown of how credits are calculated
pageCount | No | Estimated page count (300 words/page)
wordCount | No | Word count of the markdown content
totalCredits | Yes | Total credits required for this conversion
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark it as read-only and idempotent. The description adds context about what it returns (word count, page calculation, credit breakdown) and why it's useful, without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states the core purpose, second provides usage context and return details. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool complexity (5 params, output schema exists), the description fully explains purpose, output components, and integration with sibling tool behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters. The description mentions output format and template type in the credit breakdown but adds no new parameter-level detail beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('estimate') and resource ('conversion cost'), and clearly distinguishes from the sibling tool 'convert_document' by emphasizing it is a pre-run estimation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises using when the user asks about cost or when balance is suspect, and references the behavior of 'convert_document' (refuses if insufficient credits) to recommend estimation first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_template_details (A)
Read-only · Idempotent

Show available variants (page sizes and orientations) for a specific template. All MDMagic templates support the full 5×2 matrix: A3, A4, Executive, US_Legal, US_Letter × Portrait/Landscape. Use this when the user asks 'does this template come in Legal Landscape?' or 'what sizes are available?' — confirms the variant before convert_document runs.
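The 5×2 matrix the description claims every template supports can be enumerated explicitly. A small sketch using only values stated above:

```python
# The full 5x2 variant matrix the description says every MDMagic
# template supports. Values are taken verbatim from the description.
from itertools import product

PAGE_SIZES = ["A3", "A4", "Executive", "US_Legal", "US_Letter"]
ORIENTATIONS = ["Portrait", "Landscape"]

# Every (page size, orientation) combination: 5 x 2 = 10 variants.
VARIANTS = list(product(PAGE_SIZES, ORIENTATIONS))
```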

Parameters (JSON Schema)

Name | Required | Description
templateName | Yes | Template ID or name (e.g. Executive_Platinum, or a UUID for custom templates)

Output Schema

Name | Required | Description
template | Yes |
pageSizes | Yes | Supported page sizes
orientations | Yes | Supported orientations
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and safety hints. The description adds the concrete behavior of displaying the 5x2 matrix and confirms it's a read operation without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently cover purpose, usage guidance, and expected behavior with no redundancy. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 param), full schema coverage, and output schema present, the description provides complete contextual guidance for selecting and using this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for templateName. The description doesn't add significant meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool shows available variants (page sizes and orientations) for a specific template, using the verb 'Show' and specifying the resource. It distinguishes from sibling tools like convert_document by focusing on variant confirmation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides usage context: when users ask about available sizes/orientations, and clarifies it confirms the variant before conversion. It also notes the full 5x2 matrix expectation, guiding appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_all_templates (A)
Read-only · Idempotent

List all 15 built-in MDMagic templates plus any custom templates the user has uploaded.

CALL THIS PROACTIVELY when:

  • The user mentions a template by name (verify it exists before convert_document)

  • The user asks 'what templates are available' or similar

  • A previous convert_document call returned 'template not found'

  • The user describes the look they want without naming a template (so you can suggest a real one)

Returns: name, description, type (built-in vs custom), and category. Categories are: Business (5 templates), Creative (6), Professional (2), Technical (2). Use the optional category filter to narrow recommendations (e.g. 'for legal documents' → category: 'Professional').

Parameters (JSON Schema)

Name | Required | Description
category | No | Optional filter — return only built-in templates in this category. Custom templates are always included regardless. Categories: Business (executive/financial), Creative (designer/artistic/novelty), Professional (legal), Technical (code/data documentation).
includeDetails | No | Include template details like available page sizes and orientations (default: false)

Output Schema

Name | Required | Description
templates | Yes | All matching templates
customCount | No | Number of custom templates returned
builtinCount | No | Number of built-in templates returned
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds that custom templates are always included regardless of category filter, clarifying output behavior beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with core statement upfront, bulleted usage guidelines, and return details. Slightly wordy but clear and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completely covers what the tool returns, categories, and usage scenarios. No gaps given the optional parameters and existing output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but description adds value by explaining that custom templates are always included when category filter is used, and that includeDetails adds page sizes/orientations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all built-in and custom templates, distinguishing it from sibling tools like list_builtin_templates and list_custom_templates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists four scenarios when to call proactively, including after template not found errors, and suggests using category filter for targeted recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_builtin_templates (A)
Read-only · Idempotent

List the 15 built-in MDMagic templates, grouped by category. Same as list_all_templates but excludes the user's custom uploads. Use this when the user asks specifically about MDMagic's bundled templates rather than their personal ones.

Categories available: Business (5), Creative (6), Professional (2), Technical (2).

Parameters (JSON Schema)

Name | Required | Description
category | No | Optional filter — return only templates in this category.
includeDetails | No | Include template details like available page sizes and orientations (default: false)

Output Schema

Name | Required | Description
count | No | Number of templates returned
templates | Yes | Matching built-in templates
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint, idempotentHint, and destructiveHint=false, covering the main behavioral traits. The description adds the specific count of 15 templates and the grouping structure, which provides useful context but is not essential given the annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the core action and count. Every sentence adds value: the first states purpose, the second clarifies usage context. No redundancy or unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (omitted here but indicated), the description does not need to detail return values. It covers the tool's purpose, usage context, and structural grouping, making it sufficiently complete for an agent to select and invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters have schema descriptions with 100% coverage. The description does not add additional semantic meaning beyond the schema; it mentions categories and counts but not parameter-specific usage details. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists 15 built-in MDMagic templates grouped by category. It distinguishes from the sibling 'list_all_templates' by explicitly noting it excludes user custom uploads, providing a specific verb-resource pair and scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'Use this when the user asks specifically about MDMagic's bundled templates rather than their personal ones.' It also names the alternative 'list_all_templates' for the complementary case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_custom_templates (A)
Read-only · Idempotent

List only the user's custom-uploaded Word templates. Use this when the user asks about their own templates ('show me my templates', 'do I have a letterhead?'). Custom templates are referenced by UUID, not name, when calling convert_document.

Parameters (JSON Schema)

Name | Required | Description
includeDetails | No | Include template details like available page sizes and orientations (default: false)

Output Schema

Name | Required | Description
count | No | Number of custom templates returned
templates | Yes | User's custom templates
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds no behavioral traits beyond stating that it lists custom templates; the UUID note concerns convert_document rather than this tool's own behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences: first states purpose, second provides usage guidance and cross-tool info. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given rich annotations, complete schema coverage, and an output schema, the description adequately rounds out the tool's context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description does not add meaning beyond the schema's description of the includeDetails parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists only the user's custom-uploaded Word templates, specifying the verb 'list' and the resource. Its use of 'only' distinguishes it from siblings like list_all_templates and list_builtin_templates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit examples of when to use ('show me my templates', 'do I have a letterhead?') and provides cross-tool context about referencing custom templates by UUID when calling convert_document.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_template (A)
Read-only · Idempotent

Suggest the best built-in template(s) for a described purpose. Use this when the user describes WHAT the document is (e.g. 'Q4 board pack', 'API reference', 'wedding invitation', 'legal contract') without naming a template. Returns ranked recommendations with rationale.

Why this exists: AI assistants often guess template names that don't exist. This tool maps purpose → real template names from MDMagic's catalog, so convert_document doesn't fail with 'template not found'.
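A call to this tool takes only two arguments. A hedged sketch of the payload, with topN clamped to the documented 1-5 range (default 3); the purpose string is an invented example in the style the schema suggests:

```python
# Illustrative recommend_template arguments; topN is clamped to the
# documented 1-5 range (default 3). The purpose string is free text.
def build_recommend_args(purpose: str, top_n: int = 3) -> dict:
    return {"purpose": purpose, "topN": min(5, max(1, top_n))}

args = build_recommend_args("Q4 board pack for investors", top_n=10)
```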

Parameters (JSON Schema)

Name | Required | Description
topN | No | How many recommendations to return (1-5, default 3)
purpose | Yes | Free-text description of the document's purpose. Examples: 'Q4 board pack for investors', 'restaurant menu', 'developer API documentation', 'wedding invitation'.

Output Schema

Name | Required | Description
purpose | No | Echoes back the purpose that was matched
rationale | No | Why these templates were picked
recommendations | Yes | Ranked list of template IDs to pass to convert_document
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds context about mapping purpose to real template names from MDMagic's catalog, preventing conversion errors, and returning ranked recommendations. This goes beyond annotations by explaining the tool's role in the conversion pipeline.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three paragraphs, well-structured with front-loaded core information. The second paragraph adds helpful context but could be integrated more concisely. Overall efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description need not explain return values. It covers purpose, usage, rationale, and parameter context sufficiently. For a recommendation tool with 100% schema coverage, this is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description does not add meaning beyond the schema's parameter descriptions. The purpose parameter includes examples in the schema, and topN has a default and a range. With full schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool suggests the best built-in template(s) for a described purpose, using a specific verb and resource (purpose mapping). It distinguishes itself from sibling tools like convert_document and list_all_templates by specifying when to use it (when the user describes a document's purpose without naming a template).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this when the user describes WHAT the document is... without naming a template.' It also explains why it exists (to avoid guessing non-existent template names) and how it prevents failures in convert_document, providing clear when-to-use and why-not-alternatives guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

show_default_settings (A)
Read-only · Idempotent

Show the user's default paper size and orientation preferences (set on their account page). Useful when the user hasn't specified pageSize/orientation explicitly — call this to honor their defaults instead of using A4/Portrait blindly.

Parameters

No parameters.

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| default_page_size | Yes | User's preferred page size |
| default_orientation | Yes | User's preferred page orientation |
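The fallback the description asks for (honor these defaults when the caller has not set pageSize or orientation explicitly) is plain option merging. A minimal sketch, using the field names from the output schema above; the pageSize/orientation option keys are taken from the description and are otherwise assumptions:

```python
def resolve_page_options(user_opts: dict, account_defaults: dict) -> dict:
    """Fill pageSize/orientation from account defaults when the user did
    not specify them, instead of falling back to A4/Portrait blindly.

    `account_defaults` is assumed to be the result of show_default_settings,
    carrying the default_page_size / default_orientation fields from its
    output schema.
    """
    return {
        "pageSize": user_opts.get("pageSize")
            or account_defaults["default_page_size"],
        "orientation": user_opts.get("orientation")
            or account_defaults["default_orientation"],
    }
```

An agent would call show_default_settings once, then pass the merged options to the conversion call, so explicit user choices always win over account defaults.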
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds value by specifying that defaults are set on the account page and that the tool retrieves those preferences. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exceptionally concise, with two sentences that cover purpose and usage guidance. No unnecessary words, front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters, rich annotations, and an existing output schema, the description is fully complete. It explains the tool's role and when to use it without needing to detail return values (handled by output schema).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so schema coverage is 100%. The baseline is 4, and the description adds no parameter information since none exist. This is appropriate and consistent with the zero-parameter design.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool shows the user's default paper size and orientation preferences. It identifies the specific resource (user's defaults) and the action (show). The additional context about honoring defaults distinguishes it from sibling tools like convert_document.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises using this tool when the user hasn't specified pageSize/orientation, to honor their defaults instead of blindly using A4/Portrait. This provides clear when-to-use guidance, though it doesn't mention explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_markdown (A)
Read-only · Idempotent

Pre-flight markdown validation BEFORE conversion. Catches malformed tables (mismatched pipes), unclosed code fences, broken task lists, and unsupported syntax. Returns a green/amber/red status plus the detected markdown features.

CALL THIS PROACTIVELY when:

  • The user is about to convert a long document (>5 pages) — validating first is cheap; running a doomed conversion costs credits

  • The user reports a previous conversion produced broken output

  • You generated the markdown yourself and want to verify it's clean before spending credits

Returns: status (green=safe, amber=minor issues, red=will likely break), detected features (tables, code blocks, task lists, math), and a human-readable message.

| Name | Required | Description |
|------|----------|-------------|
| content | Yes | Markdown content to validate |
| filename | No | Optional filename label for the response (defaults to 'content.md') |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| status | Yes | Validation verdict |
| message | Yes | Human-readable explanation of any issues |
| filename | No | Filename label echoed back |
| inputFormat | No | Detected markdown flavour (e.g. gfm, commonmark) |
| detectedFeatures | No | Map of markdown features found in the content |
| additionalPandocFlags | No | Pandoc flags that will be applied |
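The checks this tool advertises (unclosed code fences, tables whose rows disagree on column count) are cheap to approximate locally. The sketch below illustrates the idea of such a pre-flight check; it is not MDMagic's actual validator, and its green/amber/red thresholds are invented:

```python
def quick_validate(content: str) -> dict:
    """Approximate pre-flight check: unclosed code fences and tables
    whose rows have mismatched pipe counts. Illustrative only."""
    fence = "`" * 3  # built this way to avoid a literal fence marker here
    issues = []
    if sum(1 for ln in content.splitlines() if ln.strip().startswith(fence)) % 2:
        issues.append("unclosed code fence")
    # Compare pipe counts across each run of consecutive table rows.
    run: list[int] = []
    for ln in content.splitlines() + [""]:  # trailing "" flushes the last run
        if ln.strip().startswith("|"):
            run.append(ln.count("|"))
        else:
            if run and len(set(run)) > 1:
                issues.append("table rows have mismatched pipe counts")
            run = []
    if not issues:
        return {"status": "green", "message": "No issues found"}
    status = "red" if "unclosed code fence" in issues else "amber"
    return {"status": status, "message": "; ".join(issues)}
```

Even a rough local check like this mirrors the tool's rationale: catching a doomed document before a conversion spends credits.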
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds valuable context about the return status (green/amber/red) and detected features, enhancing transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections and bullet points, but slightly verbose with multiple paragraphs; could be more concise without losing information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown but indicated), the description covers return values, usage guidance, and validation scope comprehensively, leaving no gaps for a tool of this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description only restates that 'content' is markdown content and 'filename' is optional, adding no new semantic detail beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Pre-flight markdown validation BEFORE conversion' with specific examples of what it catches (malformed tables, unclosed code fences), clearly distinguishing it from sibling conversion tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides three clear proactive calling scenarios (long documents, previous conversion issues, self-generated markdown), but lacks explicit 'when not to use' or alternative tool references.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
