
Server Details

Use your own Word templates to convert Markdown → DOCX/PDF/HTML from any MCP-compatible AI.

Status: Healthy
Transport: Streamable HTTP
Repository: MDMagic-MCP/mdmagic-mcp-server
GitHub Stars: 0
Server Listing: mdmagic-mcp-server

Tool Descriptions (Grade: A)

Average 4.5/5 across 10 of 10 tools scored.

Server Coherence (Grade: A)

Disambiguation: 5/5

Each tool has a clear, distinct purpose: credit checking, conversion, cost estimation, template details, various listing tools (all, built-in, custom), recommendation, defaults, and validation. There is no overlap or ambiguity.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with lowercase and underscores. Examples include 'check_credit_balance', 'convert_document', 'list_all_templates', and 'validate_markdown'. No naming deviations are present.

Tool Count: 5/5

With 10 tools, the server is well-scoped for its purpose of markdown-to-document conversion. Each tool addresses a necessary aspect (balance, cost estimation, template management, validation, defaults) without being excessive or insufficient.

Completeness: 5/5

The tool set covers the full workflow: pre-conversion steps (check balance, estimate cost, validate markdown, recommend template, get details), the conversion itself, and post-conversion information (balance). Template listing is split by type for easy filtering. No obvious gaps exist.

Available Tools

10 tools
check_credit_balance (Grade: A)
Read-only · Idempotent

Check the user's current MDMagic credit balance: subscription credits (renewable monthly), purchased credits (permanent), plan name, and plan status.

CALL THIS PROACTIVELY when:

  • The user asks 'how many credits do I have' or similar

  • After a conversion, if the user wants to know what's left (also returned by convert_document directly)

  • Before a conversion of an unusually large document, to warn the user if balance is borderline

Parameters (JSON Schema)

No parameters

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| total_credits | Yes | Total credits available (subscription + purchased) |
| purchased_credits | No | Permanent purchased credits |
| subscription_credits | No | Renewable monthly subscription credits |
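The output schema implies that total_credits is the sum of the two optional fields. A minimal sketch of reading such a result, with hypothetical sample values; summarize_balance is an illustrative helper, not part of the server:

```python
# Sketch: interpreting a check_credit_balance result. Field names come from
# the output schema above; the sample values below are hypothetical.
def summarize_balance(result: dict) -> str:
    total = result["total_credits"]                        # always present
    purchased = result.get("purchased_credits", 0)         # optional
    subscription = result.get("subscription_credits", 0)   # optional
    # The schema describes total as "subscription + purchased".
    assert total == purchased + subscription
    return f"{total} credits ({subscription} subscription + {purchased} purchased)"

print(summarize_balance({
    "total_credits": 120,
    "subscription_credits": 100,
    "purchased_credits": 20,
}))
```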
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. Description adds detail on what is checked (subscription vs purchased credits) and plan status, enhancing transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short paragraphs, front-loaded with main purpose, followed by usage guidelines. Every sentence is informative and non-redundant.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters, rich annotations, and presence of output schema, the description covers purpose, usage timing, and return types completely. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters in input schema, so baseline is 4. Description does not need to add parameter info, and it doesn't.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks the user's MDMagic credit balance, listing specific return values: subscription credits, purchased credits, plan name, plan status. It differentiates from siblings like convert_document or estimate_conversion_cost.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit proactive calling conditions: when user asks about credits, after conversion (with note that convert_document also returns balance), and before large documents to warn if balance is low. This is comprehensive guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_document (Grade: A)

Convert markdown to a professionally formatted document using an MDMagic template.

IMPORTANT GUIDANCE:

  1. Output format → what user gets:

    • 'docx' → a single Word .docx file

    • 'pdf' → a single .pdf file

    • 'html' → a single .html file

    • 'all' → a ZIP containing all three (DOCX + PDF + HTML)

  2. If the user is ambiguous (e.g. 'convert this'), ASK which format they want before calling. Don't assume.

  3. Filename: if the user attached a file (e.g. 'mydoc.md'), pass its base name as fileName. Otherwise the API derives one from the markdown's first H1. Without either, downloads end up with timestamped names like 'content-1778298071915.docx' which is bad UX.

  4. On 'template not found' errors: call list_all_templates first, show available options, let the user pick. Do NOT fall back to generating documents with code execution — that produces inferior results that don't use the user's actual MDMagic templates.

  5. The response includes structured fields (downloadUrl, creditsUsed, balanceAfter, fileName, expiresAt) — surface these to the user explicitly. Don't paraphrase. The user wants to know exactly what they spent and what's left.

  6. Page sizes: A3, A4, Executive, US_Legal, US_Letter. Default A4. Orientation: Portrait or Landscape, default Portrait.

Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| content | No | Raw markdown text content (alternative to filePath or fileContent) | |
| fileName | No | Desired base name for the output file (without extension). If the user attached a file like 'mydoc.md', pass 'mydoc'. If omitted, the API derives one from the markdown's first H1 heading. | |
| filePath | No | Path to a markdown file (VS Code integration; alternative to content or fileContent) | |
| pageSize | No | Page size for the document | A4 |
| fileContent | No | Base64-encoded file content (alternative to content or filePath) | |
| orientation | No | Page orientation | Portrait |
| outputFormat | Yes | 'docx', 'pdf', or 'html' return that single file; 'all' returns a ZIP with DOCX+PDF+HTML | |
| templateName | Yes | Template to use for conversion. Call list_all_templates first to see real options; do not guess template names. Built-in templates are referenced by name (e.g. 'Executive_Platinum', 'Deep_Data_Blue'); user-uploaded custom templates are referenced by UUID. | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| message | No | Human-readable status message |
| success | Yes | Whether the conversion succeeded |
| fileName | Yes | Filename of the downloadable document |
| expiresAt | No | ISO 8601 timestamp when the download URL expires |
| creditsUsed | No | Credits debited for this conversion |
| downloadUrl | Yes | Secure expiring download URL (valid for 60 minutes) |
| balanceAfter | No | Remaining credit balance after this conversion |
Behavior: 5/5

Annotations indicate mutability and non-idempotence. The description adds details about credit consumption (creditsUsed, balanceAfter) and file expiration (expiresAt), and warns against improper fallback behaviors. No contradiction with the annotations.

Conciseness: 4/5

Well-structured with numbered items and bullet points, front-loaded with the key action. Though lengthy, each section adds necessary guidance. Slight verbosity prevents a 5, but it is still clear and organized.

Completeness: 5/5

Covers all aspects: input parameters, output format, error recovery, user interaction, and output fields to surface. An output schema exists, so return values are documented. Complete for a tool with 8 parameters and complex behavior.

Parameters: 5/5

Schema coverage is 100%, but the description adds significant value: it explains when to use fileName versus the default, warns that templateName should never be guessed, and clarifies the content vs filePath alternatives. Enhances understanding beyond the schema.

Purpose: 5/5

The description clearly states the tool converts markdown to professional documents using MDMagic templates. It distinguishes itself from sibling tools by focusing on conversion, while siblings deal with listing templates or checking credits.

Usage Guidelines: 5/5

Provides explicit guidance: ask for the format if ambiguous, handle the filename properly, call list_all_templates on template errors, and avoid the code-execution fallback. Clearly explains output format options and page/orientation defaults.

estimate_conversion_cost (Grade: A)
Read-only · Idempotent

Estimate credit cost for a conversion BEFORE running it. Returns word count, page calculation (300 words/page), and a credit breakdown by format and template type. Use this when the user asks 'how much will this cost?' or when you suspect a conversion might exceed their balance — convert_document refuses to run if credits are insufficient, so estimating first is friendlier.
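The page calculation the description states (300 words/page) can be sketched as follows; rounding up to a whole page is an assumption, and the per-format credit rates are not documented here, so only word and page counts are computed:

```python
# Sketch of the documented 300 words/page calculation. Rounding up and the
# one-page minimum are assumptions; credit rates are not documented here.
import math

def estimate_pages(markdown: str, words_per_page: int = 300) -> tuple[int, int]:
    word_count = len(markdown.split())
    page_count = max(1, math.ceil(word_count / words_per_page))
    return word_count, page_count

words, pages = estimate_pages("word " * 750)  # 750 words -> 3 pages
```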

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| content | Yes | Markdown content to estimate credit cost for |
| pageSize | No | Page size for the document |
| orientation | No | Page orientation |
| outputFormat | Yes | Output format(s): docx (DOCX only), pdf (DOCX+PDF), html (DOCX+HTML), all/all-formats (DOCX+PDF+HTML) |
| templateName | Yes | Template ID or name (UUID for custom templates, name for system templates) |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| breakdown | No | Human-readable breakdown of how credits are calculated |
| pageCount | No | Estimated page count (300 words/page) |
| wordCount | No | Word count of the markdown content |
| totalCredits | Yes | Total credits required for this conversion |
Behavior: 5/5

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds context beyond the annotations: it is a non-destructive pre-estimate, and convert_document refuses to run if credits are insufficient. It details return values (word count, page calculation, credit breakdown), providing full behavioral transparency.

Conciseness: 5/5

Front-loaded with purpose, followed by usage guidance and a contrast with the sibling tool. Every sentence is essential and adds value without redundancy.

Completeness: 5/5

Given 5 parameters (3 required, 3 with enums) and an existing output schema, the description is complete. It covers purpose, usage, behavioral context, and return values sufficiently for an agent to correctly select and invoke the tool.

Parameters: 3/5

Schema coverage is 100% with detailed parameter descriptions. The description mentions return values but does not add semantic meaning to parameters beyond what the schema provides. Per guidelines, baseline 3 is appropriate when the schema covers parameters well.

Purpose: 5/5

The description clearly states the tool estimates credit cost before conversion, with the specific verb 'Estimate credit cost' and resource 'conversion'. It distinguishes itself from the sibling convert_document by noting that convert_document refuses to run if credits are insufficient, making this a friendlier pre-check.

Usage Guidelines: 5/5

Explicit usage guidance: 'Use this when the user asks "how much will this cost?" or when you suspect a conversion might exceed their balance.' It also contrasts with convert_document, providing clear when-not-to-use context.

get_template_details (Grade: A)
Read-only · Idempotent

Show available variants (page sizes and orientations) for a specific template. All MDMagic templates support the full 5×2 matrix: A3, A4, Executive, US_Legal, US_Letter × Portrait/Landscape. Use this when the user asks 'does this template come in Legal Landscape?' or 'what sizes are available?' — confirms the variant before convert_document runs.
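The 5×2 matrix described above enumerates to exactly ten variants; all values come from the description, and the tuple representation is just for illustration:

```python
# Enumerate the full 5x2 variant matrix the description guarantees for
# every template. Values come from the tool description above.
from itertools import product

PAGE_SIZES = ["A3", "A4", "Executive", "US_Legal", "US_Letter"]
ORIENTATIONS = ["Portrait", "Landscape"]

variants = list(product(PAGE_SIZES, ORIENTATIONS))
assert len(variants) == 10  # 5 sizes x 2 orientations
```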

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| templateName | Yes | Template ID or name (e.g. Executive_Platinum, or a UUID for custom templates) |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| template | Yes | |
| pageSizes | Yes | Supported page sizes |
| orientations | Yes | Supported orientations |
Behavior: 4/5

Annotations already declare read-only, idempotent, and non-destructive. The description adds that all templates return the same 5×2 matrix, a key behavioral detail. It does not contradict the annotations.

Conciseness: 5/5

Front-loaded with the core purpose, no wasted words. Every sentence adds value.

Completeness: 5/5

Has an output schema and rich annotations. The description covers purpose, usage context, and typical user queries. No gaps for an agent to misinterpret.

Parameters: 3/5

Schema description coverage is 100%. The tool description adds example values (Executive_Platinum) but no new meaning beyond the schema. Baseline 3 is appropriate.

Purpose: 5/5

Clearly states 'Show available variants' with the specific resource 'template'. Provides concrete user query examples that distinguish it from sibling tools like list_all_templates or convert_document.

Usage Guidelines: 4/5

Explicitly tells when to use it (user asks about sizes/orientations) and positions it as a pre-check for convert_document. Could be more explicit about when not to use it, but context with siblings suffices.

list_all_templates (Grade: A)
Read-only · Idempotent

List all 15 built-in MDMagic templates plus any custom templates the user has uploaded.

CALL THIS PROACTIVELY when:

  • The user mentions a template by name (verify it exists before convert_document)

  • The user asks 'what templates are available' or similar

  • A previous convert_document call returned 'template not found'

  • The user describes the look they want without naming a template (so you can suggest a real one)

Returns: name, description, type (built-in vs custom), and category. Categories are: Business (5 templates), Creative (6), Professional (2), Technical (2). Use the optional category filter to narrow recommendations (e.g. 'for legal documents' → category: 'Professional').

Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| category | No | Optional filter: return only built-in templates in this category. Custom templates are always included regardless. Categories: Business (executive/financial), Creative (designer/artistic/novelty), Professional (legal), Technical (code/data documentation). | |
| includeDetails | No | Include template details like available page sizes and orientations | false |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| templates | Yes | All matching templates |
| customCount | No | Number of custom templates returned |
| builtinCount | No | Number of built-in templates returned |
Behavior: 4/5

Annotations already indicate readOnlyHint, openWorldHint, and idempotentHint. The description adds that custom templates are always included, explains the categories, and notes the optional details flag. No contradictions.

Conciseness: 4/5

Well-structured with bullet points for usage cases and front-loaded with the main purpose. Slightly long, but each sentence adds value.

Completeness: 5/5

Given the tool's complexity with categories and custom templates, and the presence of an output schema, the description covers all key aspects needed for correct selection and invocation.

Parameters: 4/5

Schema coverage is 100%, but the description adds context: an explanation of the categories, the fact that custom templates are always included regardless of filter, and guidance on using category for recommendations.

Purpose: 5/5

The description explicitly states it lists all 15 built-in MDMagic templates plus custom templates, distinguishing it from sibling tools like list_builtin_templates and list_custom_templates.

Usage Guidelines: 5/5

Provides specific scenarios for proactive calling, such as when the user mentions a template by name, asks about available templates, or after a 'template not found' error. Also advises using the category filter for recommendations.

list_builtin_templates (Grade: A)
Read-only · Idempotent

List the 15 built-in MDMagic templates, grouped by category. Same as list_all_templates but excludes the user's custom uploads. Use this when the user asks specifically about MDMagic's bundled templates rather than their personal ones.

Categories available: Business (5), Creative (6), Professional (2), Technical (2).

Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| category | No | Optional filter: return only templates in this category | |
| includeDetails | No | Include template details like available page sizes and orientations | false |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| count | No | Number of templates returned |
| templates | Yes | Matching built-in templates |
Behavior: 5/5

Annotations already indicate readOnlyHint, idempotentHint, and non-destructive. The description adds specific behavioral details: the exact count (15), grouping by category, and the categories with counts (Business 5, Creative 6, etc.). No contradictions.

Conciseness: 5/5

Two short paragraphs: the first sentence states purpose and key feature, the second gives usage guidance, and the third lists the categories. No unnecessary words; front-loaded with the most important information.

Completeness: 5/5

Given the tool is a simple list operation with well-annotated read-only and idempotent hints, plus a present output schema, the description fully covers what the tool does, when to use it, and what to expect. No missing elements.

Parameters: 3/5

The input schema has 100% description coverage, describing both parameters adequately. The description briefly mentions the category filter but does not add any meaning beyond the schema, and it does not mention the includeDetails parameter. The schema already handles parameter semantics; the description adds no extra value.

Purpose: 5/5

The description clearly states 'List the 15 built-in MDMagic templates, grouped by category' and distinguishes itself from the sibling list_all_templates by excluding custom uploads. The verb 'list' paired with the specific resource 'built-in templates' makes the purpose unambiguous.

Usage Guidelines: 5/5

Explicit guidance: 'Use this when the user asks specifically about MDMagic's bundled templates rather than their personal ones.' It names the sibling alternative list_all_templates and explains the exclusion, providing clear when-to-use context.

list_custom_templates (Grade: A)
Read-only · Idempotent

List only the user's custom-uploaded Word templates. Use this when the user asks about their own templates ('show me my templates', 'do I have a letterhead?'). Custom templates are referenced by UUID, not name, when calling convert_document.

Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| includeDetails | No | Include template details like available page sizes and orientations | false |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| count | No | Number of custom templates returned |
| templates | Yes | User's custom templates |
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds that custom templates are referenced by UUID, not name, when calling convert_document, a behavioral nuance beyond the annotations.

Conciseness: 5/5

No wasted words. Purpose, usage context, and the behavioral note are front-loaded and efficient.

Completeness: 5/5

The tool has one optional parameter, an output schema, and a clear scope. The description covers purpose, usage, and a key behavioral detail without gaps.

Parameters: 3/5

Schema coverage is 100% (one parameter with a description). The description does not add information about the parameter beyond what the schema already provides. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states 'List only the user's custom-uploaded Word templates' with a specific verb (list) and resource (custom templates). It distinguishes itself from siblings like list_all_templates and list_builtin_templates by specifying exclusivity to the user's own templates.

Usage Guidelines: 4/5

Provides explicit usage context: 'Use this when the user asks about their own templates', with example queries. It also notes that custom templates are referenced by UUID in convert_document, aiding tool selection. It lacks an explicit 'when not to use', but the context is sufficient.

recommend_template (Grade: A)
Read-only · Idempotent

Suggest the best built-in template(s) for a described purpose. Use this when the user describes WHAT the document is (e.g. 'Q4 board pack', 'API reference', 'wedding invitation', 'legal contract') without naming a template. Returns ranked recommendations with rationale.

Why this exists: AI assistants often guess template names that don't exist. This tool maps purpose → real template names from MDMagic's catalog, so convert_document doesn't fail with 'template not found'.

Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| topN | No | How many recommendations to return (1-5) | 3 |
| purpose | Yes | Free-text description of the document's purpose. Examples: 'Q4 board pack for investors', 'restaurant menu', 'developer API documentation', 'wedding invitation'. | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| purpose | No | Echoes back the purpose that was matched |
| rationale | No | Why these templates were picked |
| recommendations | Yes | Ranked list of template IDs to pass to convert_document |
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive behavior. Description adds context about mapping purpose to real template names and rationales, which complements annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences in first paragraph, one explanatory paragraph. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given full schema coverage, output schema presence, and annotations covering safety, the description fully covers purpose, usage context, and return expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters are fully described in schema with examples. Description provides additional context by including example purposes and rationale, but does not add parameter-level detail beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states action ('Suggest the best built-in template(s)'), resource, and scope. It distinguishes from siblings by focusing on when user describes purpose without naming a template.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('when user describes WHAT the document is... without naming a template') and explains why it prevents failures. Could be improved by explicitly stating alternative tools for other cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

show_default_settings: A
Read-only, Idempotent

Show the user's default paper size and orientation preferences (set on their account page). Useful when the user hasn't specified pageSize/orientation explicitly — call this to honor their defaults instead of using A4/Portrait blindly.

Parameters (JSON Schema)

  No parameters

Output Schema (JSON Schema)

  • default_page_size (required): User's preferred page size
  • default_orientation (required): User's preferred page orientation
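A client might combine this tool's output with the user's explicit arguments roughly as follows. This is a minimal sketch: the `resolve_page_settings` helper is hypothetical, but the `default_page_size`/`default_orientation` field names match the output schema above.

```python
def resolve_page_settings(user_args: dict, defaults: dict) -> dict:
    """Prefer explicitly requested values; otherwise fall back to account defaults."""
    return {
        "pageSize": user_args.get("pageSize") or defaults["default_page_size"],
        "orientation": user_args.get("orientation") or defaults["default_orientation"],
    }
```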
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds valuable context about the source of defaults (account page) and the purpose, enhancing behavioral understanding beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences that are front-loaded with the action and provide necessary context with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters, an existing output schema, and annotations covering safety, the description fully explains the tool's purpose and usage scenario, making it complete for this simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so per guidelines the baseline is 4. The description doesn't need to add parameter info, and it doesn't.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it shows the user's default paper size and orientation preferences, which is specific and distinct from sibling tools that deal with templates and conversions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly advises using this tool when pageSize/orientation are not specified by the user, providing a clear scenario and rationale. However, it doesn't explicitly state when not to use it or name alternatives, though the context is well covered.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_markdown: A
Read-only, Idempotent

Pre-flight markdown validation BEFORE conversion. Catches malformed tables (mismatched pipes), unclosed code fences, broken task lists, and unsupported syntax. Returns a green/amber/red status plus the detected markdown features.

CALL THIS PROACTIVELY when:

  • The user is about to convert a long document (>5 pages) — validating first is cheap; running a doomed conversion costs credits

  • The user reports a previous conversion produced broken output

  • You generated the markdown yourself and want to verify it's clean before spending credits

Returns: status (green=safe, amber=minor issues, red=will likely break), detected features (tables, code blocks, task lists, math), and a human-readable message.

Parameters (JSON Schema)

  • content (required): Markdown content to validate
  • filename (optional): Optional filename label for the response (defaults to 'content.md')

Output Schema (JSON Schema)

  • status (required): Validation verdict
  • message (required): Human-readable explanation of any issues
  • filename (optional): Filename label echoed back
  • inputFormat (optional): Detected markdown flavour (e.g. gfm, commonmark)
  • detectedFeatures (optional): Map of markdown features found in the content
  • additionalPandocFlags (optional): Pandoc flags that will be applied
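Two of the checks the description names, unclosed code fences and mismatched table pipes, can be approximated locally in a few lines. This is a sketch of the idea under simplifying assumptions (every fence line opens or closes a block, all pipe-prefixed lines belong to one table), not the server's actual validator:

```python
FENCE = "`" * 3  # three backticks, i.e. a code-fence marker

def quick_validate(content: str) -> dict:
    """Rough pre-flight check: fences balanced, table rows consistent."""
    lines = content.splitlines()
    issues = []
    # An odd number of fence lines means a code block was never closed.
    if sum(1 for ln in lines if ln.strip().startswith(FENCE)) % 2 != 0:
        issues.append("unclosed code fence")
    # Table rows whose pipe counts differ suggest a malformed table.
    pipe_counts = {ln.count("|") for ln in lines if ln.strip().startswith("|")}
    if len(pipe_counts) > 1:
        issues.append("mismatched table pipes")
    return {
        "status": "green" if not issues else "red",
        "message": "; ".join(issues) or "no issues detected",
    }
```

The real tool additionally reports an amber tier, detected features, and Pandoc flags; the sketch only shows why validating first is cheap compared with burning credits on a doomed conversion.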
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, non-destructive behavior. Description adds detail on return values (status with levels, detected features) and input constraints, going beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose first, then usage bullet list, then return format. Slightly verbose but front-loaded and easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple two-parameter tool with an output schema, the description fully covers purpose, usage, and return values. No gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. Description does not add significant semantic nuance beyond what schema provides, meeting baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool validates markdown before conversion, lists specific issues it catches (malformed tables, unclosed code fences, etc.), and distinguishes itself from siblings like convert_document by emphasizing it is a pre-flight check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists three scenarios for proactive use: before long document conversion, after previous conversion failure, or after self-generating markdown. Notes cost-saving benefit of validating first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
