Server Details

Generate, render, and host Slidev presentations from markdown

Status: Healthy
Transport: Streamable HTTP
Repository: joelbarmettlerUZH/slidev-mcp
GitHub Stars: 2

Tool Descriptions (Grade: A)

Average 4.4/5 across 8 of 8 tools scored. Lowest: 3.8/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no ambiguity. Tools like list_themes (for choosing), browse_themes (for visual browsing), and get_theme (for documentation) serve complementary but non-overlapping functions. Similarly, render_slides (for creation), export_slides (for PDF output), and screenshot_slides (for visual review) target different stages of the presentation workflow, with clear boundaries reinforced by their descriptions.

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern (e.g., list_themes, get_theme, render_slides). The verbs are descriptive and appropriate for their actions (list, get, render, export, browse, screenshot), and the nouns clearly indicate the target resources (themes, slides, guide). There are no deviations in style or convention across the set.

Tool Count: 5/5

With 8 tools, the count is well-scoped for the Slidev presentation domain. Each tool earns its place by covering distinct aspects of the workflow: theme selection (list_themes, browse_themes, get_theme), slide creation and rendering (render_slides, get_slidev_guide), output and review (export_slides, screenshot_slides), and session management (list_session_slides). This provides comprehensive coverage without bloat.

Completeness: 5/5

The tool set offers complete lifecycle coverage for creating and managing Slidev presentations. It includes theme discovery (list_themes, browse_themes), theme documentation (get_theme), slide creation guidance (get_slidev_guide), rendering (render_slides), output formats (export_slides, screenshot_slides), and session tracking (list_session_slides). There are no obvious gaps; agents can follow a coherent workflow from start to finish without dead ends.
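The lifecycle coverage described above maps onto a natural call order. As a minimal sketch (not code shipped with this server), assuming a hypothetical `call_tool` helper standing in for an MCP client's `tools/call` method:

```python
calls = []

def call_tool(name, arguments=None):
    """Hypothetical stand-in for an MCP client's tools/call; records each invocation."""
    calls.append((name, arguments or {}))

# Typical end-to-end workflow for this server:
call_tool("list_themes")                                  # decide on a theme by style
call_tool("get_theme", {"theme": "seriph"})               # read the theme's documentation
call_tool("render_slides", {                              # create the deck, get a hosted URL
    "theme": "seriph",
    "markdown": "---\ntheme: seriph\n---\n\n# Hello",
})
call_tool("screenshot_slides", {"uuid": "<uuid-from-render>"})  # review slide by slide
call_tool("export_slides", {"uuid": "<uuid-from-render>"})      # hand the user a PDF
```

The `<uuid-from-render>` placeholders mark values that would come from the `render_slides` response in a real session.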

Available Tools

8 tools
browse_themes: Browse Themes (Grade: A)
Read-only

Show the user a visual theme gallery with preview images.

ONLY call this when the user explicitly asks to SEE or BROWSE themes visually (e.g. "show me the themes", "what do they look like", "let me pick a theme"). This renders an interactive gallery in the user's UI.

To show a filtered subset (e.g. only dark themes), first call list_themes to identify matching themes, then pass their names here.

Do NOT call this to decide which theme to use yourself — use list_themes for that instead.

Parameters (JSON Schema)
themes (optional): Optional list of theme names to show (e.g. ['dracula', 'neocarbon', 'vibe']). If omitted, all themes are shown. Use list_themes first to find matching themes, then pass the filtered names here.
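To illustrate how an agent might invoke this tool over MCP, here is a sketch of a `tools/call` request, assuming the standard JSON-RPC envelope; the theme names are examples taken from the parameter description:

```python
import json

# Filtered browse: these names would normally come from a prior list_themes call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "browse_themes",
        "arguments": {"themes": ["dracula", "neocarbon", "vibe"]},
    },
}
print(json.dumps(request, indent=2))
```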
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm readOnlyHint=true, while description adds crucial behavioral context that it 'renders an interactive gallery in the user's UI' (side effect disclosure) and explains the dependency on list_themes for filtering workflows.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with clear paragraph breaks separating purpose, usage triggers, workflow instructions, and exclusions. Every sentence provides distinct value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately complete for a UI-rendering tool with no output schema; covers the rendering behavior and interaction pattern. Minor gap: could mention error handling for invalid theme names, but sufficient for correct agent operation given the constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% providing complete technical documentation, but description adds valuable workflow semantics explaining how to populate the parameter ('first call list_themes... then pass their names here'), which guides the agent in correct invocation sequences.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Show the user a visual theme gallery with preview images') and clearly distinguishes from sibling list_themes by emphasizing the visual/interactive nature versus programmatic filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use criteria with concrete examples ('show me the themes', 'what do they look like'), explicit exclusions ('Do NOT call this to decide which theme to use yourself'), and named alternative (list_themes). Also documents the two-step workflow for filtering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

export_slides: Export Slides as PDF (Grade: A)

Export a presentation as a downloadable PDF.

The presentation must have been created in the current session. Returns a URL to download the PDF.

Parameters (JSON Schema)
uuid (required): UUID of the presentation to export. Must be from the current session.
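A sketch of the corresponding `tools/call` parameters; the UUID shown is a placeholder for the value returned by `render_slides` earlier in the session:

```python
# export_slides takes a single required argument. The uuid must identify a
# presentation created by render_slides in the current session.
params = {
    "name": "export_slides",
    "arguments": {"uuid": "123e4567-e89b-12d3-a456-426614174000"},  # placeholder
}
```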
Behavior: 4/5

Annotations indicate readOnlyHint=false (mutation/side effects). The description adds valuable behavioral context: it generates a PDF and 'Returns a URL to download the PDF' (filling the gap of no output schema). It also reinforces the session-scoping limitation.

Conciseness: 5/5

Three sentences, zero waste. Main action front-loaded ('Export a presentation...'), constraint in the middle, return value last. Every sentence provides essential information not found in structured fields.

Completeness: 4/5

Given the single parameter, 100% schema coverage, and absence of output schema, the description adequately covers the return value (URL) and key behavioral constraints. No major gaps for this complexity level, though error conditions (e.g., invalid UUID) are not mentioned.

Parameters: 3/5

Schema coverage is 100%, so the schema fully documents the uuid parameter including the session constraint. The description repeats the session constraint but doesn't add syntax details, examples, or semantics beyond what's in the schema. Baseline 3 is appropriate.

Purpose: 4/5

Clear verb (Export) + resource (presentation) + specific format (PDF). The 'downloadable PDF' phrasing distinguishes it from sibling render_slides (likely for display) and screenshot_slides (images), though it doesn't explicitly name these alternatives.

Usage Guidelines: 4/5

Explicitly states the critical constraint: 'The presentation must have been created in the current session.' This tells the agent when they can use this tool and implies they need to check session status first. Could be improved by mentioning the prerequisite of having created slides.

get_slidev_guide: Get Slidev Guide (Grade: A)
Read-only

Get the Slidev syntax guide: how to write slides in markdown.

Returns the official Slidev syntax reference (frontmatter, slide separators, speaker notes, layouts, code blocks) plus built-in layout documentation and an example deck. Call this once to learn how to write Slidev presentations.

Parameters (JSON Schema)
No parameters.

Output Schema
result (required)
Behavior: 4/5

Annotations indicate readOnlyHint=true, confirming safe read operation. The description adds valuable context beyond annotations by detailing the specific content returned (official syntax reference, built-in layout documentation, example deck), though it omits potential rate limits or caching behavior.

Conciseness: 5/5

Three well-structured sentences with zero waste: front-loaded purpose ('Get the Slidev syntax guide'), middle content specification (what is returned), and terminal usage directive ('Call this once...'). Every sentence earns its place.

Completeness: 5/5

For a simple zero-parameter retrieval tool with readOnly annotations and an output schema, the description is complete. It covers purpose, return content summary, and invocation timing without needing to detail return structure (handled by output schema) or parameter validation.

Parameters: 4/5

Tool has zero parameters with 100% schema coverage (empty object). Per calibration rules, zero parameters establishes a baseline of 4. The description appropriately does not invent parameter semantics where none exist.

Purpose: 5/5

Description clearly states the tool 'Get[s] the Slidev syntax guide' with specific details on what it returns (frontmatter, slide separators, layouts, etc.), distinguishing it from operational siblings like render_slides or export_slides which perform actions rather than retrieve documentation.

Usage Guidelines: 4/5

Provides explicit guidance with 'Call this once to learn how to write Slidev presentations,' clearly indicating this is an onboarding/educational tool. However, it does not explicitly contrast with specific sibling tools or state when NOT to use it (e.g., 'do not use this to render slides').

get_theme: Get Theme Details (Grade: A)
Read-only

Get full documentation for a specific theme: layouts, components, and examples.

Call this BEFORE render_slides to learn the theme's unique features. Each theme has different layouts, components, and frontmatter options. Use what you learn here to produce high-quality, theme-specific slides.

This is the primary tool for preparing to render slides. When the user specifies a theme, call this directly — no need to call browse_themes.

Parameters (JSON Schema)
theme (required): Theme name (e.g. 'seriph', 'neocarbon', 'field-manual'). Available themes: default, seriph, apple-basic, bricks, shibainu, academic, cobalt, dracula, eloc, field-manual, frankfurt, geist, neocarbon, neversink, nord, penguin, purplin, scholarly, swiss-ai-hub, the-unnamed, unicorn, vibe, vuetiful, zhozhoba.

Output Schema
result (required)
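Because the schema enumerates the valid theme names, a client could validate the argument before calling. A hypothetical sketch (`build_get_theme_call` is not part of this server):

```python
# Theme names reproduced from the parameter schema above.
AVAILABLE_THEMES = {
    "default", "seriph", "apple-basic", "bricks", "shibainu", "academic",
    "cobalt", "dracula", "eloc", "field-manual", "frankfurt", "geist",
    "neocarbon", "neversink", "nord", "penguin", "purplin", "scholarly",
    "swiss-ai-hub", "the-unnamed", "unicorn", "vibe", "vuetiful", "zhozhoba",
}

def build_get_theme_call(theme: str) -> dict:
    """Build tools/call params for get_theme, rejecting names outside the schema's list."""
    if theme not in AVAILABLE_THEMES:
        raise ValueError(f"unknown theme: {theme!r}")
    return {"name": "get_theme", "arguments": {"theme": theme}}
```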
Behavior: 4/5

Annotations declare readOnlyHint=true, so the description appropriately focuses on workflow context rather than safety warnings. It adds valuable behavioral context about return value structure (documentation includes layouts/components/frontmatter options) and explains that theme capabilities vary, which helps the agent understand why this call is necessary.

Conciseness: 5/5

Six sentences with zero waste. Front-loaded with the core action, followed by workflow timing, rationale, quality goal, primary role declaration, and sibling distinction. Every sentence earns its place with no redundancy.

Completeness: 5/5

Given the presence of an output schema (so return values need not be described), readOnly annotations (so safety is declared), and 100% parameter coverage, the description is complete. It covers purpose, sequencing, sibling relationships, and usage implications sufficiently.

Parameters: 3/5

With 100% schema description coverage, the schema fully documents the theme parameter including examples and the complete enum list. The description references 'specific theme' but does not add parameter syntax or format details beyond the schema, which is appropriate given the schema's completeness.

Purpose: 5/5

The description opens with a specific verb+resource ('Get full documentation for a specific theme') and explicitly enumerates the scope (layouts, components, examples). It clearly distinguishes from sibling tools by contrasting with browse_themes ('no need to call browse_themes') and establishing its relationship to render_slides.

Usage Guidelines: 5/5

Provides explicit sequencing ('Call this BEFORE render_slides'), clear context ('primary tool for preparing to render slides'), and direct alternative exclusion ('no need to call browse_themes'). The workflow guidance is concrete and actionable.

list_session_slides: List Session Slides (Grade: A)
Read-only

List all slide presentations created in the current MCP session.

Returns URLs, themes, and timestamps for each presentation you've created.

Parameters (JSON Schema)
No parameters.

Output Schema
slides (optional)
Behavior: 4/5

Annotations indicate read-only safety; description adds valuable behavioral context by disclosing return payload contents ('URLs, themes, and timestamps') and session persistence scope. No contradictions with annotations.

Conciseness: 5/5

Two sentences with zero waste: first defines action and scope, second defines return values. Front-loaded with critical operation verb and perfectly sized for the tool's simplicity.

Completeness: 5/5

Fully adequate for a zero-parameter read operation. Output schema exists (reducing need for detailed return documentation), yet description helpfully summarizes return fields. Annotations cover safety profile; description covers functional scope.

Parameters: 4/5

Zero-parameter tool qualifies for baseline 4. Description appropriately acknowledges the empty parameter state by focusing entirely on operation scope and return values rather than inventing non-existent parameter documentation.

Purpose: 5/5

Specific verb ('List') + resource ('slide presentations') with clear scope ('created in the current MCP session'). Effectively distinguishes from sibling 'list_themes' by specifying 'slide presentations' rather than themes, and from 'render_slides' by focusing on existing created content.

Usage Guidelines: 4/5

Provides clear context through session-scoping ('current MCP session'), implying when to use it (to inspect created work). However, lacks explicit 'when-not-to-use' guidance or named sibling alternatives (e.g., contrasting with 'render_slides' for creation vs. listing).

list_themes: List Themes (Grade: A)
Read-only

Get a list of all available themes with style descriptions and recommendations.

Call this to decide which theme to use. Returns a guide organized by style (dark, academic, modern, playful, etc.) with "best for" recommendations.

After picking a theme, call get_theme with the theme name to read its full documentation (layouts, components, examples) before rendering.

This tool does NOT display anything to the user — it is for your own reference when choosing a theme.

Parameters (JSON Schema)
No parameters.

Output Schema
result (required)
Behavior: 4/5

Annotations declare readOnlyHint=true. Description adds valuable behavioral context: output is organized by style with 'best for' recommendations, and critically notes it 'does NOT display anything to the user — it is for your own reference.' This internal/external effect distinction is vital for agent decision-making.

Conciseness: 5/5

Four sentences, each serving distinct purpose: (1) core function, (2) return format and usage trigger, (3) workflow continuation, (4) behavioral constraint. No redundancy. Front-loaded with actionable verb.

Completeness: 5/5

Given the tool has an output schema (mentioned in context signals) and zero parameters, the description provides sufficient context by previewing the output structure (guide organized by style) and explaining its place in the multi-step theme selection workflow without duplicating schema details.

Parameters: 4/5

Input schema contains zero parameters. Per scoring rules, baseline is 4 for parameter-less tools. Description appropriately does not invent parameter semantics where none exist.

Purpose: 5/5

States specific action ('Get a list of all available themes') and scope ('with style descriptions and recommendations'). Explicitly distinguishes from sibling 'get_theme' by stating this is for deciding which theme to use, while get_theme is for reading full documentation after picking.

Usage Guidelines: 5/5

Provides explicit workflow: 'Call this to decide which theme to use' (when), followed by 'After picking a theme, call get_theme' (next step/alternative). Also clarifies when NOT to use: 'This tool does NOT display anything to the user,' distinguishing it from render_slides/screenshot_slides.

render_slides: Render Slidev Presentation (Grade: A)

Render a Slidev presentation from markdown and return its hosted URL.

IMPORTANT: Before calling this tool, you MUST call get_theme with the theme name you plan to use. Each theme has unique layouts, components, and frontmatter options. Apply the theme's specific features in your markdown to produce high-quality slides that match the theme's design.

If the user has not specified a theme, call list_themes to pick one. If you are unfamiliar with Slidev markdown syntax, call get_slidev_guide.

Images must be remote URLs or base64-encoded inline. Local file paths are not supported.

Parameters (JSON Schema)
uuid (optional): UUID of an existing presentation to update in-place (same URL). Omit to create a new presentation.
theme (required): Theme name (e.g. 'seriph', 'default', 'neocarbon'). Read slidev://themes/installed for the full list of available themes.
markdown (required): Full Slidev markdown including frontmatter. Use layouts, components, and features specific to the chosen theme.
color_schema (optional, default 'light'): Color scheme: 'light', 'dark', or 'auto'. Controls whether slides render in light or dark mode.
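To make the parameter interactions concrete, here is an illustrative argument set; the markdown deck is invented for the example, and omitting `uuid` creates a new presentation while reusing one updates it in place:

```python
# Illustrative render_slides arguments (not a real deck).
markdown = """---
theme: seriph
---

# Quarterly Review

---

## Agenda

- Results
- Roadmap
"""

arguments = {
    "theme": "seriph",        # must match the theme whose docs you fetched via get_theme
    "markdown": markdown,     # full Slidev markdown, frontmatter included
    "color_schema": "dark",   # 'light' (default), 'dark', or 'auto'
    # "uuid": "...",          # include only to update an existing presentation in place
}
```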
Behavior: 4/5

Adds critical behavioral constraints beyond annotations: image handling requirements (remote URLs/base64 only, no local paths) and the mandatory workflow dependency chain (get_theme first). Annotations indicate non-idempotent and non-readOnly; the description supports this with the update-in-place capability via UUID (per schema) but could clarify side effects like build duration or URL persistence.

Conciseness: 5/5

Excellent structure: purpose first, then mandatory prerequisites (IMPORTANT), then conditional branches (If...), then technical constraints. Every sentence conveys critical workflow information without redundancy.

Completeness: 4/5

Comprehensive for a rendering tool: covers input preparation, prerequisites, output format (URL), and runtime constraints. Given no output schema, it adequately describes the return value. Minor gap: doesn't specify URL lifetime or build error handling, but the workflow coverage is complete.

Parameters: 4/5

While schema coverage is 100%, the description adds valuable semantic context: the markdown parameter requires specific image encoding (remote/base64), and the theme parameter requires pre-validation via get_theme. This operational context exceeds the schema's static type definitions.

Purpose: 5/5

The opening sentence clearly states the specific action ('Render'), resource ('Slidev presentation'), input ('from markdown'), and output ('hosted URL'). It effectively distinguishes from siblings like export_slides (export vs render) and screenshot_slides (capture vs create).

Usage Guidelines: 5/5

Provides explicit prerequisites ('MUST call get_theme'), conditional alternatives ('If the user has not specified a theme, call list_themes'), and knowledge-gap alternatives ('If you are unfamiliar... call get_slidev_guide'). Also specifies when to use UUID vs omit it (implied by schema, workflow described).

screenshot_slides: Screenshot Slides (Grade: A)
Read-only

Render all slides as PNG images and return them.

Use this to visually review a presentation. Returns one image per slide so you can see exactly what each slide looks like and give specific feedback.

Parameters (JSON Schema)
uuid (required): UUID of the presentation to screenshot.
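A sketch of the call an agent might make after rendering; the UUID is a placeholder for the value returned by `render_slides`, and per the description the response carries one PNG image per slide:

```python
# screenshot_slides request params; uuid is a placeholder for the value
# returned by render_slides earlier in the session.
params = {
    "name": "screenshot_slides",
    "arguments": {"uuid": "123e4567-e89b-12d3-a456-426614174000"},
}
```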
Behavior: 4/5

Annotations indicate readOnlyHint: true. The description adds valuable behavioral context not in the annotations: it specifies the return format (one image per slide), the file type (PNG), and the visual fidelity ('see exactly what each slide looks like'). This helps the agent understand the tool's output structure despite the lack of an output schema.

Conciseness: 5/5

The description is optimally concise with two efficient sentences. The first declares the action and format; the second provides the use case and return structure. No redundant words or filler content. Information is front-loaded effectively.

Completeness: 4/5

Given the simple single-parameter input and available annotations, the description adequately compensates for the missing output schema by explaining that it returns one image per slide. It successfully communicates the tool's function for visual review workflows, though explicit sibling comparisons would improve completeness.

Parameters: 3/5

With 100% schema description coverage for the single 'uuid' parameter, the baseline score applies. The description does not mention the parameter explicitly, but the schema is self-documenting. No additional parameter context (like format constraints or examples) is provided in the description, but none is needed given the complete schema.

Purpose: 4/5

The description clearly states the tool renders slides as PNG images for visual review. It specifies the output format (PNG), distinguishing it somewhat from siblings like 'render_slides' and 'export_slides'. However, it does not explicitly differentiate from these siblings or clarify why one would choose this over 'render_slides'.

Usage Guidelines: 3/5

The description states 'Use this to visually review a presentation' and mentions giving 'specific feedback', providing a clear use case. However, it lacks explicit guidance on when NOT to use this tool or how it compares to alternatives like 'export_slides' (for file downloads) or 'render_slides'.

