MiOffice — AI-Powered Workspace Studio
Server Details
125+ browser tools for PDF, Image, Video, Audio, AI, Scanner. Files never leave your device.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.4/5 across 132 of 132 tools scored. Lowest: 2.1/5.
Most tools have distinct purposes indicated by their names, but there are some overlapping pairs like remove_background and remove_background_pro, and remove_object and inpaint_pro, which could cause confusion.
The naming convention is mostly consistent, with a 'mio_' prefix followed by a category and a descriptive snake_case name, but there are minor inconsistencies such as a stray 'mioffice_' prefix and varied verb structures (illustrated in the sketch below).
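As a hedged illustration of both issues, the snippet below contrasts names that follow the dominant convention with the flagged variants. The background-removal pair is taken from the catalog below; the 'mioffice_' example is a hypothetical composition, not a confirmed tool name.

```typescript
// Names following the dominant convention: "mio_" prefix, category, snake_case action.
const consistentNames = [
  "mio_ai_remove_background",
  "mio_ai_photo_colorizer",
  "mio_ai_document_summarizer",
];

// The flagged inconsistencies: a drifting prefix, and near-duplicate pairs
// whose names alone do not tell an agent which one to call.
const flaggedNames = [
  "mioffice_scan_document",       // hypothetical example of the "mioffice_" prefix drift
  "mio_ai_remove_background",     // overlaps with its "_pro" sibling below
  "mio_ai_remove_background_pro",
];
```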
With 132 tools, the server is excessively large for an MCP server, far exceeding the typical 3-15 tools, making it difficult for agents to navigate and select the right tool.
The tool set covers a wide range of workspace needs including AI, image, PDF, scanner, video, and audio editing, with only minor gaps like lack of AI image editing beyond background removal.
Available Tools
133 tools

mio_ai_audio_enhancer (Grade: B)
AI Audio Enhancer — Enhance audio — speech denoising or music mastering, depending on input. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description discloses important behaviors: it runs on AI workers (Modal), credits vary, files auto-delete within 24 hours, and workspace credits work in packs. However, it does not state whether the operation is destructive, what the output format is, or what permissions are required.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
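The "no annotations" remarks refer to MCP's optional tool annotations (readOnlyHint, destructiveHint, and related fields). Below is a minimal sketch of what such a disclosure could look like for this tool; every hint value is an assumption inferred from the prose description, not confirmed server behavior.

```typescript
// Illustrative MCP ToolAnnotations for mio_ai_audio_enhancer.
// All values are assumptions inferred from the prose description.
const annotations = {
  title: "AI Audio Enhancer",
  readOnlyHint: false,    // produces a new enhanced file rather than only reading
  destructiveHint: false, // assumed: the source file is not overwritten
  idempotentHint: false,  // each run consumes credits, so repeat calls are not free
  openWorldHint: true,    // dispatches to external AI workers (Modal)
};
```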
Is the description appropriately sized, front-loaded, and free of redundancy?
The description starts with a clear purpose but then includes several sentences about pricing and workspace policies that are not directly about tool functionality. It could be more concise by focusing on essential usage and output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no input parameters and no output schema, the description covers the AI execution, credit cost, and file retention, but fails to explain what the user should expect as output (e.g., enhanced audio file) or how to specify the input file.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (0 params), so the baseline is 4. The description does not add parameter information because none exist, but it does mention credits varying by model and file size, which might imply hidden parameters not in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
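For contrast, here is a minimal sketch of the kind of input declaration the empty schema omits. The parameter names (file_url, mode) and enum values are hypothetical, chosen to match the "speech denoising or music mastering" behavior the description mentions.

```typescript
// Hypothetical JSON Schema for mio_ai_audio_enhancer; the real tool declares no parameters.
const inputSchema = {
  type: "object",
  properties: {
    file_url: {
      type: "string",
      description: "URL or upload ID of the audio file to enhance",
    },
    mode: {
      type: "string",
      enum: ["speech_denoise", "music_master"],
      description: "Processing mode; the description implies it is inferred from the input when omitted",
    },
  },
  required: ["file_url"],
};
```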
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'AI Audio Enhancer — Enhance audio — speech denoising or music mastering, depending on input', giving a specific verb and resource. It distinguishes itself from sibling tools like 'mio_audio_denoise' by implying an AI-powered process, but does not explicitly differentiate from similar tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus alternatives like 'mio_audio_denoise' or 'mio_audio_compressor'. It focuses on credits and auto-deletion policies, not on selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
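A hedged sketch of the "use X instead of Y when Z" pattern this dimension asks for, referencing the sibling tools named above; the wording is hypothetical, not the tool's actual description.

```typescript
// Hypothetical rewrite of the description's opening, adding selection guidance.
const description =
  "Enhance audio — speech denoising or music mastering, depending on input. " +
  "Use this for one-shot AI cleanup; prefer mio_audio_denoise for basic noise " +
  "removal only, or mio_audio_compressor for dynamics control. " +
  "Runs in AI Studio (credits apply); files auto-delete within 24 hours.";
```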
mio_ai_cartoon_filter (Grade: B)
AI Cartoon Filter — Transform photos into anime/cartoon style art using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure. It reveals that the tool dispatches to AI workers (Modal), credits vary by model and file size, files auto-delete within 24 hours (retention auditable), and that Day Pass and welcome credits exclude AI Studio. This goes beyond basic functionality and informs the agent of important constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise, front-loading the core purpose. It includes necessary context about credits, retention, and licensing in a structured way. Each sentence adds value, though the length is slightly more than required for a simple filter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of input schema and output schema, the description is incomplete. It fails to specify how to provide the input photo, what the output format will be, or how the tool interacts with the user's files. While it covers credits and retention, the missing input specification is a critical gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero properties, which is likely an error since the tool requires a photo input. The description mentions 'photos' but does not explain how to provide the image (e.g., via file ID or upload). Even with 0 parameters, the description fails to compensate for the missing input specification. The agent cannot infer the expected input from the description or schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool transforms photos into anime/cartoon style art using AI. It clearly identifies the action (transform), the resource (photos), and the target style (anime/cartoon). This distinguishes it from sibling AI tools like image generation or face enhancement.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for cartoonifying photos but provides no explicit guidance on when to use this tool versus alternatives. It mentions credits and file deletion but does not compare to other AI tools or state prerequisites. The purpose is clear but without situational or comparative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_clip_maker (Grade: A)
AI Clip Maker — Extract the best short clips from long videos using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description discloses important behaviors: AI Studio run dispatch, credit usage variation, file auto-deletion within 24 hours, and credit pack workspace unlocking. It does not detail the entire processing flow but covers key behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose and each sentence adds relevant context. It is slightly verbose but still concise for the amount of information conveyed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters or output schema, the description covers the tool's purpose, operational mechanism, credit implications, and data retention policy, making it fully adequate for an agent to understand and invoke the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description cannot add parameter meaning beyond it. Per guidelines, a baseline of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Extract the best short clips from long videos using AI', clearly identifying the verb (extract) and resource (long videos). It distinguishes itself from sibling AI tools that perform different tasks like audio enhancement or image generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description provides operational details (credits, file auto-delete, workspace unlocking), it does not explicitly state when to use this tool versus alternatives, nor does it mention prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_document_summarizer (Grade: A)
AI Document Summarizer — Summarize long documents into key points using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses dispatch to AI workers (Modal), variable credits, exclusion of Day Pass, 24-hour auto-deletion, and retention auditability. However, it omits details about output format and input mechanism.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose and includes necessary context without excessive verbosity. Some sentences about credit workspaces could be trimmed, but overall it's efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (AI model, variable cost) and no output schema, the description covers some behavioral aspects (auto-delete, credits) but fails to explain input acquisition and output format. It is adequate but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters, the description should explain how to provide the document. Instead, it only mentions 'long documents' without specifying input method (e.g., file upload or text). This is a critical gap for a tool that clearly requires an input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Summarize long documents into key points using AI'. It distinguishes itself from sibling tools like 'mio_ai_document_translator' by specifying the core action of summarization.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about AI Studio, credits, and auto-deletion, but does not explicitly state when to use this tool versus alternatives. No sibling tools are directly compared, leaving the agent to infer usage without clear boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_document_translator (Grade: C)
AI Document Translator — Translate text between 16 languages using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses file auto-deletion and retention, but fails to clarify how input is provided (the schema has no parameters) or what the output looks like. The implied file input contradicts the empty schema, creating a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively long (several sentences) and includes important operational details (retention, pricing). However, it lacks essential information on how to call the tool, making it not optimally concise for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the schema has no parameters and no output schema, the description should cover input specification (how to provide text) and output format. It does not; it focuses on pricing and retention, leaving major gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not add parameter meaning. However, the description mentions 'model and file size,' implying parameters exist, which contradicts the schema. Schema coverage is trivially 100%, so the baseline is 3, but the misleading implication lowers the score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Translate text between 16 languages using AI.' This is a specific verb (translate) and resource (text). However, it does not explicitly distinguish from sibling tools like mio_ai_video_translator (video) or mio_ai_transcriber (speech to text).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credits, file retention, and workspaces, but does not give guidance on when to use this tool versus alternatives or when not to use it. No alternative tools are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_face_enhancer (Grade: A)
AI Face Enhancer — Enhance and restore faces in photos using AI — sharpen details, fix blur, improve quality. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that it dispatches to AI workers, credits vary, files auto-delete within 24 hours, and credit packs work across workspaces. This provides useful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat verbose, including pricing and credit details that may not be essential for agent invocation. It could be more concise by focusing on core functionality and key usage notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's purpose, behavior, and operational constraints, but it does not specify input requirements (e.g., file type or how to provide the image). Given no parameters and no output schema, this is a gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description cannot add parameter meaning. Per guidelines, baseline is 4 for 0 params. The description does not need to elaborate on parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Enhance and restore faces in photos using AI — sharpen details, fix blur, improve quality.' It distinguishes from sibling tools like face_swap and photo_restorer by focusing on enhancement.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides operational details (credits, auto-deletion, AI Studio) but does not explicitly guide when to use this tool versus alternatives. It lacks context for selection among similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_face_swap (Grade: A)
Face Swap — Swap faces between photos using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses key behavioral traits: it dispatches to AI workers, credits vary, files auto-delete in 24 hours, and credit packs are unified. It does not mention potential destructive actions, but the disclosed details are sufficient for a simple tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured paragraph that starts with the core purpose. It is concise while still including the necessary operational details; it could be slightly tighter but is overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and zero parameters, the description covers purpose and key behaviors. However, it lacks guidance on how to supply photos (e.g., via file upload or URL) and what the output format is, leaving some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (trivially 100% coverage), so the baseline is 3. The description adds value by explaining credit and retention behavior, but does not elaborate on parameter details since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Face Swap — Swap faces between photos using AI.' It specifies the verb (swap), resource (faces in photos), and distinguishes from sibling AI tools like face enhancer or image generator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the underlying AI Studio run, credit usage, and file retention, but does not provide explicit guidance on when to use face swap versus other similar tools or mention prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_headshot_generator (Grade: A)
AI Headshot Generator — Generate professional headshots from any photo. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that runs are dispatched to AI workers (Modal), that files auto-delete within 24 hours, and that retention is auditable. This is beyond basic, but it doesn't detail the exact input method or output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with purpose, then supporting details. Every sentence provides distinct value: credit model, retention, workspace policy, and pricing reference. No redundant or extraneous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description covers purpose, credits, retention, and workspace info. However, it misses how to provide the input photo and what the output looks like (return value). This is a notable gap for a generation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema description coverage is 100% (no params to document). However, the description implies the tool accepts a photo ('from any photo') but does not define how to pass it (e.g., file upload or URL). This omission could confuse agents; description fails to add meaning beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Generate professional headshots from any photo.' It uses a specific verb ('Generate') and resource ('professional headshots'), distinguishing it from siblings like face_enhancer or cartoon_filter which do different transformations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage context: credits vary, Day Pass/welcome credits excluded, auto-delete within 24 hours, and workspace unlock policy. It does not explicitly state when not to use this tool or suggest alternatives, but the context is clear enough for informed selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_image_generator (Grade: A)
AI Image Generator — Generate images from text descriptions using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses credit variability, the exclusion of Day Pass/welcome credits, 24-hour auto-deletion with auditability, and the workspace unlocking policy. This is substantial behavioral context beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat long but each sentence adds necessary value: purpose, platform, credit details, retention policy, workspace unlocking, and pricing reference. It is front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite explaining credits and retention, the description fails to address how the text prompt is provided when there are zero parameters. This is a critical gap for an image generation tool. Additionally, no output schema or return value description is given, leaving the agent uncertain about the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no properties (0 parameters). According to guidelines, 0 parameters yields a baseline of 4. The description adds no parameter info because there is none to add.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Generate images from text descriptions using AI.' It is a specific verb+resource combination, and it distinguishes itself from sibling tools like logo generator and headshot generator by being a general image generator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credits, auto-deletion, and pricing, but does not explicitly state when to use this tool vs alternatives (e.g., logo generator). Usage is implied but not guided with when-not or alternative suggestions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_inpaint_pro (Grade: C)
AI Eraser Pro — Remove objects, watermarks, and unwanted elements with AI inpainting. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It mentions that the tool dispatches to AI workers (Modal) and that files auto-delete in 24 hours, but fails to disclose critical behavioral traits such as whether the operation is synchronous or asynchronous, whether it modifies original files, or what permissions are required.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description contains several sentences about credits, workspace policies, and file retention, which are somewhat tangential to the core function. While informative, it could be more tightly focused on what the tool does and how to use it. It is not overly long but could be restructured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description should explain the tool's input mechanism (missing) and output format. It mentions file auto-deletion but not what the agent receives after invocation (e.g., a URL or file ID). In the context of many sibling tools, it fails to help the agent distinguish when to use inpainting vs. other removal methods.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, yet an inpainting tool typically requires an image and a mask. The description does not explain how the AI determines what to erase, leaving a critical gap. Schema coverage is trivial (100% of no params), so the description fails to add meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
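A sketch of how an inpainting call could declare both inputs the reviewer finds missing; the field names are illustrative assumptions, since the actual schema is empty.

```typescript
// Hypothetical JSON Schema for mio_ai_inpaint_pro: a source image plus a mask
// marking the region to erase. Field names are assumptions for illustration.
const inpaintInputSchema = {
  type: "object",
  properties: {
    image_url: {
      type: "string",
      description: "Source image to edit",
    },
    mask_url: {
      type: "string",
      description: "Binary mask; white pixels mark the area to remove and inpaint",
    },
  },
  required: ["image_url", "mask_url"],
};
```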
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool uses AI inpainting to remove objects, watermarks, and unwanted elements, which establishes the core purpose. However, it does not differentiate from sibling tools like 'mio_ai_remove_object' or 'mio_ai_remove_background_pro', which may perform similar functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives. It mentions credit costs and file retention policies but does not help the agent decide between this and other removal or editing tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_logo_generator (Grade: A)
AI Logo Generator — Generate professional logos from text descriptions. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and discloses important behaviors: dispatch to AI workers, variable credits, exclusion of Day Pass/welcome credits, 24-hour auto-deletion, auditable retention, and workspace pack details. It adds value beyond what the (absent) annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4 sentences) and front-loaded with the tool's purpose. Every sentence adds necessary context (credits, retention, workspace). No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite covering credits and retention, the description fails to explain how the text description is provided (since input schema has no parameters) and does not describe the output format (e.g., image type, resolution). For an AI generation tool with no output schema and no annotations, this leaves significant ambiguity for an agent invoking the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters (trivially 100% coverage), so the description does not need to explain parameter meanings. It mentions 'from text descriptions', hinting at a prompt, but no formal parameter is defined. The baseline of 4 applies, as no parameter information is missing from the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates professional logos from text descriptions, using a verb ('Generate') and a specific resource ('logos'). It distinguishes itself among siblings like 'mio_ai_image_generator' by specifying 'logo' generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credits, auto-deletion, and workspace unlocking, but does not explicitly state when to use this tool versus alternatives like 'mio_ai_image_generator'. It implies usage for logo creation but lacks exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_melody_to_music (Grade: A)
AI Melody to Music — Upload a melody or hum a tune and AI creates a full music arrangement in your style. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses important behavioral traits: it uses Modal workers, credits vary by model and file size, files auto-delete in 24 hours, and one credit pack unlocks all workspaces. No annotations exist, so the description carries the full burden and does so effectively, though it omits output format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action, followed by relevant operational details. Every sentence adds value, though it could be slightly more concise; nonetheless, it is well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers functionality and operational aspects, but the empty input schema contradicts the implied need for a melody upload, and lack of output schema leaves return format unknown. This reduces completeness for an agent relying on both schema and description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, so description need not add parameter meaning. Baseline is 4 for no parameters; the description correctly implies user provides a melody via upload, but this is not reflected in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts a melody or hum into a full music arrangement, using the verb 'creates' and specifying the resource (melody to music). It differentiates from siblings like mio_ai_music_generator by focusing on melody input rather than text-based generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about the AI Studio run, credit consumption, file retention, and workspace unlocking. However, it does not explicitly compare with alternatives or state when not to use this tool versus its siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_music_generator (Grade: A)
AI Music Generator — Generate royalty-free instrumental background tracks from text descriptions. No vocals — perfect for video, podcast, and ad backgrounds. For songs with vocals + lyrics, see AI Song Generator. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses key behaviors: dispatch to AI workers, credit-based pricing, 24-hour auto-deletion, and cross-workspace credit unlocking. However, it does not mention permissions or output characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single block of text rather than structured bullet points; while each sentence adds value, the formatting could be improved for agent readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks an explanation of the text input mechanism despite referring to 'text descriptions', and does not specify the output format (e.g., audio file type). Given zero parameters, more detail on how to provide input is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description cannot add parameter information. The baseline score of 4 applies per the rubric for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Generate' and the resource 'royalty-free instrumental background tracks from text descriptions', clearly distinguishing it from siblings like 'mio_ai_audio_enhancer' (enhancement) and 'mio_ai_melody_to_music' (melody input).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credit costs, the AI Studio requirement, and auto-deletion, but does not explicitly compare to alternatives or state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_photo_colorizer (Grade: A)
AI Photo Colorizer — Colorize black and white photos using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behaviors: dispatches to AI workers, credits vary by model and file size, files auto-delete within 24 hours with auditable retention. This is comprehensive for a simple tool with no parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is about 6 sentences, front-loading the main purpose. It is reasonably concise but includes some billing and retention details that could be considered secondary. Still, every sentence adds value, and it fits nicely without excessive verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers purpose, credit model, and retention, it lacks any mention of the output format or what the user gets (presumably a colorized image). Given no output schema, the description should at least note that the result is a downloadable file. This gap reduces completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the schema description coverage is 100% trivially. The description adds no parameter info, which is acceptable since there are none. Baseline of 4 is appropriate for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: colorize black and white photos using AI. It specifies it is an AI Studio run dispatched to AI workers (Modal), distinguishing it from other AI photo tools like face_enhancer or photo_restorer. The verb 'colorize' and resource 'photos' are well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credits, file retention, and pricing, but does not explicitly state when to use this tool versus alternatives like 'mio_ai_photo_restorer' or 'mio_ai_face_enhancer'. It gives some context but no direct comparisons or contraindications.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_photo_restorer (Grade: B)
AI Photo Restorer — Restore old, damaged, or low-quality photos using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavioral traits. It covers execution model (Modal), credits, and file deletion, but critically omits how to provide the input photo (no parameters in schema) and what the output is. This omission leaves ambiguity about tool invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and then provides supplementary details. It is slightly verbose with pricing and workspace policy, but overall structured and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no parameters, the description lacks essential context: how to supply the photo (e.g., file upload reference) and what the output is (restored image). It focuses on credits/data retention but ignores input/output mechanics, making it incomplete for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (100% schema coverage). The description adds no parameter-specific information beyond the schema, meeting the baseline of 3 for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Restore old, damaged, or low-quality photos using AI.' This specific verb (restore) and resource (photos) effectively distinguishes it from sibling tools like colorization or background removal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for photo restoration tasks but does not explicitly guide when to use this vs. other AI photo tools (e.g., colorizer, face enhancer). No when-not or alternative recommendations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_remove_background (Grade: C)
Remove Background — Remove image background using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description bears full burden. It discloses that it dispatches to AI workers (Modal), credits vary, and files auto-delete within 24 hours with auditability. However, it does not state that the tool is destructive (modifies the image) or the side effects on original files.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that front-loads the purpose. While it includes multiple pieces of information (credits, retention, pricing link), it remains relatively concise without extraneous details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and a sibling with 'pro' variant, the description should explain how to invoke the tool and what the output looks like. It does not cover input method, output format, or key differentiators, leaving the agent underinformed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline would be 4. However, the description fails to clarify how the agent provides the image to be processed. It does not explain the context or input mechanism, leaving a significant gap for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool removes image backgrounds using AI. However, it does not differentiate from the sibling 'mio_ai_remove_background_pro', which likely offers additional features. The purpose is specific but misses sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions credit usage and file retention policies, but does not provide when to use this tool versus alternatives (e.g., the pro version). No explicit context for usage or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
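The two gaps flagged above (no declared input, no behavioral annotations) could both be closed in the registration itself. A minimal sketch, assuming the TypeScript MCP SDK (`@modelcontextprotocol/sdk`) with zod; the `image_url` parameter, the stated non-destructive behavior, and the annotation values are illustrative assumptions, since MiOffice's real input mechanism is undocumented:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "mioffice", version: "1.0.0" });

server.registerTool(
  "mio_ai_remove_background",
  {
    description:
      // Assumed behavior, stated explicitly so the agent knows the side effects.
      "Remove image background using AI. Produces a new image with a transparent " +
      "background; the source file is not modified.",
    inputSchema: {
      // Hypothetical parameter: the reviewed schema declares no inputs at all.
      image_url: z.string().url().describe("URL of the source image to process"),
    },
    annotations: {
      readOnlyHint: false,    // writes a new output file
      destructiveHint: false, // does not overwrite the source
      idempotentHint: true,   // re-running on the same input yields the same result
    },
  },
  async ({ image_url }) => ({
    content: [{ type: "text", text: `Background removal queued for ${image_url}` }],
  })
);
```

With the parameter declared, the "how is the image provided" question disappears, and the hints give agents the destructive/read-only signal the rubric asks for.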
mio_ai_remove_background_pro (Grade: B)
AI Background Remover Pro — Remove backgrounds with AI — superior quality for complex edges, hair, and transparency. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses important behavioral traits: AI Studio run via Modal, credit cost variation, auto-deletion of files within 24 hours, auditability, and workspace credit policy. No contradictions or omissions in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but includes extraneous marketing details (e.g., pricing URL) and could be streamlined. It is structured as a coherent paragraph but not front-loaded with the most essential info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (AI-powered, credit-based, file retention), the description covers costs and data handling but completely omits the input mechanism, making it incomplete. No output schema exists, but that's acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, yet the description fails to explain how the tool receives an image to process. This is a critical omission—users cannot understand how to invoke the tool correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it removes backgrounds with AI and highlights superior quality for complex edges, hair, and transparency. However, it does not differentiate from the sibling 'mio_ai_remove_background' nor specify how input is provided, slightly reducing clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use for complex edges and transparency but offers no explicit guidance on when to use this pro version versus the non-pro sibling. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_remove_object (Grade: A)
Remove Object — Remove watermarks, objects, or unwanted elements from images. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses asynchronous dispatch, credit cost variation, 24-hour file retention, and credit workspace unlocking. Provides a link to pricing. Without annotations, the description carries full burden and does so well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose, but includes extra sentences on credits and retention that, while informative, add length. Good structure overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, credits, retention, and workspace unlocking. For a zero-param tool, it is largely complete, though it does not explicitly describe the output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, so no additional parameter information is needed. Baseline set to 4 per guidelines.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (remove) and resource (watermarks, objects, unwanted elements from images). It provides specific examples but does not differentiate from sibling tools like 'mio_ai_remove_background' or 'mio_ai_inpaint_pro'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It lacks context for agent selection among similar image removal tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_silence_remover (Grade: A)
AI Silence Remover — Automatically remove silent gaps from videos and audio. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description only partially discloses behavior: it notes dispatch to AI workers, auto-deletion of files after 24 hours, and credit requirements. However, it omits whether the tool modifies original files or creates new outputs, and gives no detail on other side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with core purpose, then covers credits and retention. Could be more concise by separating operational details, but overall well-structured with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Does not explain how to provide input or what the output is. With no parameters and no output schema, the agent is left uncertain about both invocation context and results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so the description does not need to add parameter information. Schema coverage is 100%, making the baseline high, and the description is sufficient for parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Automatically remove silent gaps from videos and audio' with specific verb and resource, distinguishing it from siblings like vocal remover or audio enhancer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description focuses on credits and retention rather than usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_song_generator (Grade: A)
AI Song Generator — Generate full songs with vocals + lyrics + instrumentation from text. Powered by MiOffice Song Engine. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description sufficiently covers behavioral traits: it mentions AI Studio dispatch, variable credits, auto-deletion within 24 hours, and subscription model. It does not cover all edge cases (e.g., insufficient credits), but the disclosed behaviors are relevant and transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and then adds context on credits and policies. It is slightly verbose but each sentence earns its place. Could be more structured, but overall concise for the amount of information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, no output schema, and no annotations, the description provides adequate context: what the tool does, credit usage, auto-deletion, and pricing. However, it does not specify output format or style options, leaving some gaps. Still, it is reasonably complete for a zero-param tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so the description cannot add parameter semantics beyond what schema provides. Baseline per rules is 4. The description implies text input is required but does not elaborate on how it is provided (e.g., via prompt context).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Generate full songs with vocals + lyrics + instrumentation from text,' using a specific verb and resource. It distinguishes from sibling tools like 'mio_ai_music_generator' which likely focuses on instrumental music only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about when to use (generating songs from text) and includes important notes on credits, auto-deletion, and workspace pricing. However, it does not explicitly contrast with alternatives such as 'mio_ai_music_generator' or 'mio_ai_melody_to_music', so it lacks explicit exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_talking_head (Grade: A)
AI Talking Head — Animate a face photo with audio to create a talking video. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses AI workers, variable credits, auto-deletion, and pricing, but lacks details on output format, supported input formats, and any operational limits (e.g., max file size, audio length). Some transparency but gaps remain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Purpose is front-loaded in the first sentence. The description covers costs, retention, and pricing in a single paragraph. Minor redundancy about AI Studio and credit packs slightly reduces conciseness, but overall efficient for a tool with no parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of AI video generation and no output schema, the description is incomplete. It does not specify output format (e.g., MP4), required input formats (photo type, audio type), or size limits. The input mechanism (how to provide photo/audio) is not described, which is critical since no parameters exist.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0 parameters (100% coverage), so description adds necessary context that inputs include a face photo and audio. Since schema provides no parameter names or types, the description compensates adequately by stating the required inputs in natural language.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Animate a face photo with audio to create a talking video', which is a specific verb and resource. It distinguishes from sibling tools like face swap or audio enhancer by specifying the unique combination of inputs and output.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for talking head generation when you have a photo and audio, but does not explicitly compare with alternatives like text-to-video or voice cloner. It mentions credits and exclusions but no direct when-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_text_to_video (Grade: A)
AI Text to Video — Generate video from text descriptions using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses key behaviors: dispatches to AI workers (Modal), credits vary, auto-deletion within 24 hours, and retention auditability. It also notes that Day Pass and welcome credits exclude this tool. This provides good transparency about what happens during execution and constraints, though it doesn't specify exact output format or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single coherent paragraph that front-loads the purpose ('AI Text to Video') and then provides essential supporting details. Every sentence adds value—credit policy, retention, workspace unlock—without redundancy. It could be slightly more structured (e.g., bullet points) but remains concise and focused.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description covers the main aspects: purpose, execution model, credit usage, and file retention. However, it omits details about the output (e.g., generated video format, size, or how to retrieve it) that would benefit an agent. The mention of 'auditable' retention is helpful but still leaves the tool's lifecycle only partially specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is 100% trivially. Per guidelines, with 0 params the baseline is 4. The description does not add parameter-level detail because none exist, but it compensates by explaining the tool's behavior and constraints. No additional parameter semantics are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates video from text using AI, with a specific verb ('Generate') and resource ('video from text descriptions'). It distinguishes from siblings like mio_ai_image_generator by focusing on video, though it doesn't explicitly compare to related tools like mio_ai_clip_maker. The additional context about credits and auto-deletion supports understanding but doesn't detract from purpose clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes some usage context (credits, auto-deletion, workspace unlocking) but lacks explicit guidance on when to use this tool versus alternatives. No when-not-to-use or comparison to siblings like mio_ai_clip_maker is provided. The credit policy details may confuse as they describe limitations rather than ideal usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
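The missing output story could be addressed with a declared result shape. A minimal zod sketch; the field names and formats are hypothetical, since the reviewed tool documents no output at all:

```ts
import { z } from "zod";

// Hypothetical result shape for mio_ai_text_to_video; nothing in the reviewed
// definition confirms these fields. Recent MCP SDK versions let a tool declare
// a shape like this as `outputSchema` and return it via `structuredContent`.
const TextToVideoResult = z.object({
  video_url: z.string().url().describe("Download URL for the generated video"),
  format: z.enum(["mp4", "webm"]).describe("Container format of the result"),
  expires_at: z
    .string()
    .describe("ISO 8601 timestamp; per the description, files auto-delete within 24 hours"),
});

type TextToVideoResult = z.infer<typeof TextToVideoResult>;
```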
mio_ai_transcriber (Grade: C)
AI Audio Transcriber — Convert speech to text with AI-powered transcription. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description details credit usage and file retention policies, which is useful. However, it fails to explain how the audio input is provided, especially since the input schema has no parameters, leaving a critical gap in behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose and provides essential logistical details in a structured manner. Slightly long but each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is incomplete for a zero-parameter tool. It doesn't explain how the agent should provide audio input, which is critical for correct invocation. No output schema either, leaving many context gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so schema coverage is 100%. The description doesn't add parameter-specific value, but baseline is 4 for zero parameters. The description's lack of clarity on how to provide audio reduces its additive value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts speech to text with AI transcription. However, it doesn't differentiate from sibling tools like mio_ai_video_subtitler which also transcribes speech, limiting distinctiveness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool over alternatives. It discusses credits and file deletion but not context for choosing transcription over other AI audio tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_upscale_pro (Grade: A)
AI Image Upscaler Pro — Upscale images to 4x resolution with AI — sharper details, no artifacts. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key behaviors: credit-based pricing, file auto-deletion within 24 hours, auditable retention, and credit pack details. No annotations exist, so the description carries full transparency burden and does so adequately. Could add more on failure modes or output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose, but includes excessive detail about credit packs and pricing that could be condensed. Some sentences (e.g., 'All three credit-based workspaces unlock…') are tangential to the core function. Could be shorter while retaining essential info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fails to describe how to provide the input image, despite the tool likely requiring an image file. No output format or behavior after processing is explained. With no output schema, this gap is significant. Additional info on prerequisites or invocation context is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema is empty (zero parameters, trivially 100% coverage). The description does not explain how to provide the image input, which is a critical gap for invocation. The agent cannot determine what to pass, making parameter semantics poor.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Upscale images to 4x resolution with AI', specifying verb (upscale), resource (images), and distinguishing features (4x, sharper details, no artifacts). Differentiates from sibling 'mio_image_upscale' by being 'Pro' and mentioning AI Studio.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context about AI Studio runs and credit costs, but does not explicitly compare to the basic upscale tool (mio_image_upscale) or define when to prefer this over alternatives. Lacks explicit when-to-use/when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_video_background_remover (Grade: A)
AI Video Background Remover — Remove or replace video backgrounds using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It explains the execution model (AI workers), credit costs, file auto-deletion within 24 hours, and auditability. This adds operational context beyond basic 'removes background'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose, then provides essential details about credits, retention, and pricing. Each sentence adds value, though it could be slightly more concise by merging the credit and retention lines.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no parameters, the description covers purpose, cost, and data retention. However, it does not specify what the output is (e.g., file download URL) or how results are obtained, nor does it compare with sibling tools for image background removal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters in the input schema, so schema coverage is trivially 100%. The description does not add parameter details but mentions that credits vary by model and file size, hinting at implicit parameters. Since there are none, it is adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it removes or replaces video backgrounds using AI. The title and first sentence make this explicit. It distinguishes from siblings like 'mio_ai_remove_background' (for images) by specifying 'video'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool over alternatives like 'mio_ai_remove_background' or 'mio_ai_remove_background_pro'. Usage is implied by the name and first sentence, but no direct comparison or exclusions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_video_enhancer (Grade: A)
AI Video Enhancer — Upscale and enhance video quality using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. It discloses that it dispatches to AI workers, credits vary, Day Pass does not include AI Studio, and files auto-delete within 24 hours. This is adequate behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is verbose, including pricing and workspace details that may not be essential for tool invocation. While front-loaded with purpose, it could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description covers purpose, credits, file retention, and access constraints. It is sufficiently complete for a no-parameter tool, though output format is not specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline 4 per guidelines. The description does not need to add parameter info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'AI Video Enhancer — Upscale and enhance video quality using AI', clearly stating the verb and resource. However, it does not differentiate from the sibling 'mio_ai_upscale_pro', which may have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides operational details like AI Studio run, credits, and file deletion but does not explicitly state when to use this tool vs alternatives (e.g., upscale_pro). No direct comparison with siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_video_subtitler (Grade: B)
AI Video Subtitler — Auto-generate subtitles for any video using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, the description provides useful behavioral traits: dispatches to AI workers (Modal), credits vary, files auto-delete in 24 hours, retention auditable, and credit pack covers all workspaces. This adds transparency beyond the bare minimum, though it lacks details on output format or retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose and is informative. However, it mixes operational details (credits, retention) that may not be essential for every use case. Could be more concise by separating core functionality from administrative notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (AI subtitle generation), zero parameters, and no output schema, the description is incomplete. It does not clarify how to specify a video, what the output format is, or how to retrieve results. It also fails to differentiate from the similar 'mio_video_auto_captions' tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema coverage, the baseline is 4. However, the description does not explain how the video input is provided or selected, which is critical for a subtitler tool. It adds administrative context but fails to clarify parameter meaning or usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the purpose: 'Auto-generate subtitles for any video using AI.' This is specific and actionable. However, it does not distinguish from the similar sibling tool 'mio_video_auto_captions', so clarity is slightly reduced.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It does not mention conditions, prerequisites, or when not to use it. The administrative details (credits, retention) are provided but do not inform usage decisions relative to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_video_translator (Grade: A)
AI Video Translator — Translate and dub videos into other languages using AI. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description bears full responsibility. It discloses important behaviors: AI Studio run dispatches to AI workers, credits vary, Day Pass/welcome credits exclude AI Studio, files auto-delete within 24 hours, and credit packs cover all workspaces. This goes beyond what structured fields provide, though it could describe output handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat verbose with multiple sentences covering credits, retention, and pricing. While front-loaded with a clear purpose, it includes details that could be condensed or moved to linked docs. Not excessively long, but not optimally concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite zero parameters and no output schema, the description omits crucial context: what is the input video (source, format, how to provide it) and what is the output (translated video, file format, download method). The tool's operation is ambiguous without these details, especially for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema_description_coverage is 100%. Per guidelines, baseline is 4. The description does not add parameter information, but that's appropriate since there are none to describe.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Translate and dub videos into other languages using AI.' It uses specific verbs and a distinct resource (videos), and it's clearly differentiated from sibling tools like mio_ai_video_subtitler (subtitles only) and mio_ai_video_enhancer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention conditions for use, prerequisites, or explicit when-not-to-use scenarios. With many sibling video tools, this omission makes selection harder for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_vocal_remover (Grade: A)
AI Vocal Remover — Remove vocals from any song to create instrumentals or karaoke tracks. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully bears the burden of behavioral disclosure. It transparently covers credit consumption, retention policies, and workspace unlocking, giving the agent a clear understanding of consequences and constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is informative but somewhat verbose, covering multiple aspects in a single paragraph. It efficiently communicates essential information, though it could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no input parameters, no output schema, and no annotations, the description covers purpose, cost, retention, and workspace details fairly comprehensively. It lacks mention of output format but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to explain parameters. It meets the baseline for no-parameter tools by providing general context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Remove vocals from any song to create instrumentals or karaoke tracks.' This uses a specific verb and resource, and it distinguishes itself from sibling AI tools like audio enhancer or background remover.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on when to use the tool (for vocal removal) and includes important usage details (credit costs, auto-deletion, workspace unlocking). However, it does not explicitly mention when not to use or list alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_ai_voice_cloner (Grade: B)
AI Voice Cloner — Clone any voice from a short audio sample — upload a 5-10 second recording and generate speech in that voice. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must disclose behavior. It covers AI worker dispatch, credits, and auto-deletion, but fails to explain the input mechanism (audio file upload) or output format. The lack of parameters in the schema contradicts the described workflow, reducing clarity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose but includes extra policy details (Day Pass, workspace unlocks) that could be omitted or placed in notes. It is somewhat verbose for a tool needing more clarity on inputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and empty input schema, the description must provide complete context. It fails to specify how to provide the audio sample (no parameter in schema) and does not describe output format or behavior. The tool is incomplete in specification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters in the schema, baseline is 4, but the description implies a necessary input (audio file) not reflected in the schema. This contradiction reduces useful meaning. The description adds no parameter information beyond the schema, and the mismatch harms semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool clones voices from a short audio sample, with a specific verb ('clone') and resource ('voice'). It distinguishes itself from siblings like 'mio_ai_voice_generator' by focusing on cloning from a sample.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on requirements (5-10 sec recording, credits, file deletion) but does not explicitly contrast with alternatives or specify when to use this tool instead of other voice tools. Usage guidance is implied but not comparative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
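The contradiction is mechanical to fix: the inputs the prose already names (a 5-10 second sample, plus text to speak) can be declared in the schema. A sketch under those assumptions; only the recording-length detail comes from the tool description, while the parameter names are invented:

```ts
import { z } from "zod";

// Sketch of an input schema matching the described workflow. Only the
// "5-10 second recording" detail comes from the tool description; the
// parameter names are hypothetical.
const VoiceClonerInput = {
  sample_audio_url: z
    .string()
    .url()
    .describe("5-10 second voice recording to clone"),
  text: z
    .string()
    .min(1)
    .describe("Text to synthesize in the cloned voice"),
};
```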
mio_ai_voice_generator (Grade: B)
AI Voice Generator — Convert text to natural-sounding speech using AI — 6 voices in English and Spanish, with engine tiers for cleaner studio-grade output. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description provides important behavioral details: credits vary, Day Pass excluded, files auto-delete in 24 hours, auditable logs, and credit pack usage. This covers billing and data lifecycle, but does not clarify failure modes or input limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One paragraph covering multiple aspects, but could be more structured. It front-loads the purpose but includes procedural and pricing details, which is acceptable but not highly optimized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no input schema and no output schema, yet the description does not explain how to invoke it or what it returns. The note about file auto-deletion implies an output file, but format and accessibility are missing. Incomplete for a tool with zero explicit parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero properties, yet the tool is described as converting text to speech. The description does not explain how the text is provided (e.g., via file upload or a separate parameter), leaving a semantic gap despite 100% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts text to natural-sounding speech, which is a specific verb+resource. However, it does not explicitly differentiate from the sibling 'mio_ai_voice_cloner', though the purpose is distinct enough.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like voice cloning. The description focuses on billing and file retention, not on usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
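The description already enumerates the choices (6 voices, two languages, engine tiers); surfacing them as parameters would remove the guesswork. A sketch with invented placeholder values:

```ts
import { z } from "zod";

// Voice IDs and tier names are placeholders; only the counts ("6 voices",
// "engine tiers") come from the tool description.
const VoiceGeneratorInput = {
  text: z.string().min(1).describe("Text to convert to speech"),
  voice: z
    .enum(["voice_1", "voice_2", "voice_3", "voice_4", "voice_5", "voice_6"])
    .describe("One of the 6 English/Spanish voices"),
  engine_tier: z
    .enum(["standard", "studio"])
    .default("standard")
    .describe("Higher tier yields cleaner, studio-grade output at higher credit cost"),
};
```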
mio_audio_compressor (Grade: C)
Audio Compressor — Control dynamic range with professional compression. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that it runs in browser and uses more credits than Document/Image/Scanner workspaces. Notes credit limitations. However, missing key behaviors: how input is provided, whether it modifies original or creates new file, output format, potential side effects. Annotations are absent, so description carries burden but falls short.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description front-loads purpose but then adds several sentences about credit plans and pricing. While credit info is relevant, it makes the description longer than necessary. Could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no params and no output schema, description should explain input source, output format, and any constraints. It doesn't. Contextually incomplete: agent cannot determine how to invoke it or what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 properties, so description should explain how input is specified. It doesn't, leaving ambiguity. Baseline for 0 params is 4, but lack of input mechanism reduces it to 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it's an audio compressor that controls dynamic range. The verb 'control' and resource 'dynamic range' are specific. However, it doesn't differentiate from sibling audio tools like equalizer or denoise, which also affect dynamics, though it is clearly distinct from video compressors.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this compressor versus other audio tools or video compressors. No prerequisites or context for optimal use. It only mentions credit consumption but not when to prefer it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
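To ground the 'dynamic range' critique, here is a minimal sketch of what browser-side compression typically looks like, using the standard Web Audio API's DynamicsCompressorNode. This illustrates the technique in general; it is an assumption, not MiOffice's actual implementation, and the parameter values are arbitrary starting points.

```typescript
// Minimal sketch: dynamic-range compression in the browser via Web Audio.
// Not MiOffice's code; values are illustrative defaults.
async function compressPlayback(file: File): Promise<void> {
  const ctx = new AudioContext();
  const buffer = await ctx.decodeAudioData(await file.arrayBuffer());
  const source = new AudioBufferSourceNode(ctx, { buffer });

  const compressor = new DynamicsCompressorNode(ctx, {
    threshold: -24, // dB level where compression begins
    knee: 30,       // dB width of the soft knee
    ratio: 4,       // 4:1 gain reduction above the threshold
    attack: 0.003,  // seconds to reach full compression
    release: 0.25,  // seconds to release after the signal drops
  });

  source.connect(compressor).connect(ctx.destination);
  source.start();
}
```

A real tool would render offline (OfflineAudioContext) and re-encode the result rather than just playing it back, which is exactly the output-delivery detail the description leaves unstated.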
mio_audio_converter (Grade: C)
Audio Converter — Convert between MP3, WAV, FLAC, OGG, and AAC. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description mentions credit consumption and pricing, which is marginally behavioral, but does not disclose processing behavior (e.g., where files come from, how output is delivered, error states).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is overly long due to pricing/credit details that belong elsewhere. The essential functional info is brief, but the extra sentences about credits, Day Pass, and pricing are irrelevant for tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description should explain the workflow (e.g., input file selection, output format specification). It fails to do so, leaving the agent without critical usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so schema coverage is 100%. The description adds meaning by enumerating supported formats (MP3, WAV, FLAC, OGG, AAC), which compensates for the lack of parameters. However, it does not explain how input/output files are specified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts between MP3, WAV, FLAC, OGG, and AAC. However, it is mixed with pricing/credit info that dilutes the core purpose. The 'Audio Converter' title helps, and the verb 'convert' and resource 'audio formats' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs siblings like mio_audio_compressor, mio_audio_denoise, or even format-specific converters like mio_video_to_mp3. No prerequisites or context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
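The recurring gap across these zero-parameter tools is that nothing tells the agent how to supply a file or pick an output format. As a hedged illustration only, a definition that answered those critiques might look like the following; the parameter names, wording, and required fields are invented for this sketch, not taken from the server.

```typescript
// Hypothetical improved definition for a converter-style tool.
// All names and wording here are invented for illustration.
const improvedToolDefinition = {
  name: "mio_audio_converter",
  description:
    "Convert an uploaded audio file between MP3, WAV, FLAC, OGG, and AAC. " +
    "Returns a URL to the converted file; the original upload is left unchanged.",
  inputSchema: {
    type: "object",
    properties: {
      inputFileId: {
        type: "string",
        description: "ID of a previously uploaded audio file",
      },
      targetFormat: {
        type: "string",
        enum: ["mp3", "wav", "flac", "ogg", "aac"],
        description: "Desired output format",
      },
    },
    required: ["inputFileId", "targetFormat"],
  },
};
```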
mio_audio_denoise (Grade: C)
Audio Denoise — Remove background noise from recordings. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavior. It states the tool removes noise and uses more credits, but lacks details on mutation (e.g., what happens to the original file), permissions, or output format. The billing information is useful but does not cover behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat verbose, spending 50 words on credit and subscription details after the purpose statement. While it is structured, the length is not justified given the tool's simplicity, and the key functional description is brief.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description should clarify how the tool receives input and what it returns. It fails to do so, leaving the agent unsure about the workflow or required context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to add parameter info. It correctly omits parameter details, meeting the baseline of 4 for 0-param tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Remove background noise from recordings,' which is a specific verb+resource. However, it does not differentiate from siblings like 'mio_ai_audio_enhancer' or 'mio_video_denoise,' so it misses the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It only mentions credit consumption and workspace restrictions, which do not help an agent decide between similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_audio_equalizer (Grade: A)
Audio Equalizer — Adjust bass, mid, and treble frequencies. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden of behavioral disclosure. It mentions the tool runs in the browser and consumes credits, but does not explain what happens to the input audio (e.g., whether it's modified destructively, output format, or if it requires an audio file to be loaded). This leaves ambiguity about the tool's side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose but runs to six sentences. The trailing sentences about pricing details (welcome credits, Day Pass, one-time credit pack) are tangential to tool selection and could be streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description explains the tool's basic function and credit usage. However, it lacks information about the expected output (e.g., processed audio file) and how to provide input (e.g., via a file selector). This gap reduces completeness for a tool with no explicit input schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters and 100% schema description coverage, so the description does not need to explain parameters. According to guidelines, 0 parameters yields a baseline of 4. The description does not add parameter-related information, but the schema itself is sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Audio Equalizer — Adjust bass, mid, and treble frequencies.' The verb 'adjust' and resource 'frequencies' are specific. It distinguishes from sibling tools (e.g., mio_ai_audio_enhancer, mio_audio_compressor) by focusing on equalization.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about when to use this tool by mentioning it runs in the browser and uses more credits than Document/Image/Scanner workspaces. It also clarifies credit limitations (welcome credits, Day Pass exclusion). However, it does not explicitly compare to other audio tools or state when to prefer this tool over alternatives like an enhancer or compressor.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
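For reference, 'adjust bass, mid, and treble' maps onto a standard three-band filter chain in the Web Audio API. A minimal sketch, assuming Web Audio is the processing layer (the description only says it 'processes in the browser'); the frequencies and gains are arbitrary.

```typescript
// Minimal sketch: three-band EQ (bass/mid/treble) with BiquadFilterNode.
// Band frequencies and gains are illustrative, not the tool's values.
function makeThreeBandEq(ctx: AudioContext) {
  const bass = new BiquadFilterNode(ctx, { type: "lowshelf", frequency: 200, gain: 3 });
  const mid = new BiquadFilterNode(ctx, { type: "peaking", frequency: 1000, Q: 1, gain: -2 });
  const treble = new BiquadFilterNode(ctx, { type: "highshelf", frequency: 4000, gain: 2 });

  bass.connect(mid).connect(treble); // source -> bass -> mid -> treble -> output
  return { input: bass, output: treble };
}
```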
mio_audio_fade (Grade: C)
Audio Fade — Add smooth fade-in and fade-out effects. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It notes that processing occurs in the browser and consumes more credits, but omits critical behavioral traits like whether the effect is destructive, reversible, or the expected output format. This leaves the agent with significant unknowns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is verbose, with over half the content devoted to credit and pricing details unrelated to core functionality. A concise statement about fade effects would suffice; the extra text detracts from the primary purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description should cover return values, side effects, or prerequisites. It only hints at credit usage, leaving out essentials such as audio file requirements, fade duration options, and the expected output. The description is critically incomplete for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters with 100% schema description coverage (vacuously). The description adds no parameter information since there are none, meeting the baseline of 4 for zero-parameter tools per the guidelines.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The opening sentence 'Add smooth fade-in and fade-out effects' clearly states the tool's action and resource. However, the subsequent lengthy discussion about credit usage and pricing dilutes the focus and does not directly distinguish it from sibling tools like mio_video_fade.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credit consumption and workspace accessibility, suggesting it's for users with appropriate credits. However, it does not explicitly state when to use this tool over alternatives (e.g., mio_video_fade) or specify prerequisites such as supported audio formats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
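Fade-in/fade-out is gain automation; a minimal Web Audio sketch of the effect follows, assuming (as above) that the browser's audio stack is the processing layer. The two-second fade length is an invented example, since the description never says how durations are chosen.

```typescript
// Minimal sketch: fade-in and fade-out via gain ramps. Illustrative only.
function playWithFades(ctx: AudioContext, buffer: AudioBuffer, fadeSeconds = 2): void {
  const source = new AudioBufferSourceNode(ctx, { buffer });
  const gain = new GainNode(ctx, { gain: 0 });
  const start = ctx.currentTime;
  const end = start + buffer.duration;

  gain.gain.setValueAtTime(0, start);
  gain.gain.linearRampToValueAtTime(1, start + fadeSeconds); // fade in
  gain.gain.setValueAtTime(1, end - fadeSeconds);
  gain.gain.linearRampToValueAtTime(0, end);                 // fade out

  source.connect(gain).connect(ctx.destination);
  source.start(start);
}
```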
mio_audio_reverb (Grade: A)
Audio Reverb — Add room reverb and echo effects. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must fully disclose behavior. It mentions browser processing and credit consumption but fails to detail input requirements (e.g., audio format, file size), output format, reversibility, or any side effects. The credit info is helpful but insufficient for comprehensive behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, but the following 4-5 sentences on credit and plan details are somewhat verbose. While relevant, they could be condensed. The structure is adequate but not optimal for quick agent scanning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and no annotations, the description should be self-contained. It explains purpose and credit usage but omits input source (e.g., how to provide audio), file format/limits, processing duration, or what the tool returns. This leaves significant gaps for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters with 100% coverage (no params to describe). The description adds no parameter detail, but the baseline is 4 since there are no parameters needing explanation. The context about credits is not parameter-related but acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'Audio Reverb — Add room reverb and echo effects.' This clearly states the tool's action (add) and resource (reverb and echo effects). It effectively distinguishes it from sibling audio tools like denoise or equalizer, making purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides important usage context: it processes in the browser, uses more credits than Document/Image/Scanner workspaces, and notes that Day Pass does not include it. It also explains credit plans. However, it does not explicitly state when to use this tool over alternatives or mention exclusions, though the credit info guides appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
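'Room reverb' in browser audio is conventionally convolution with an impulse response. A minimal sketch under that assumption; the impulse-response path is a placeholder, and nothing here is claimed about how MiOffice actually implements the effect.

```typescript
// Minimal sketch: convolution reverb. The impulse-response file is a placeholder.
async function withReverb(ctx: AudioContext, source: AudioNode): Promise<void> {
  const response = await fetch("/impulses/medium-room.wav"); // hypothetical asset
  const impulse = await ctx.decodeAudioData(await response.arrayBuffer());
  const convolver = new ConvolverNode(ctx, { buffer: impulse });
  source.connect(convolver).connect(ctx.destination);
}
```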
mio_audio_speed (Grade: C)
Audio Speed — Change audio playback speed without pitch distortion. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions it 'processes in the browser' and uses credits, which are behavioral traits beyond the trivial 'speed change' feature. However, it does not disclose important behaviors such as file format support, output format, or whether it modifies the original file. Since no annotations are provided, the description should carry more weight but fails to do so adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph with multiple sentences. The first sentence is functional, but the remaining sentences about credits and pricing are tangential to the tool's core behavior. While the information may be useful, it adds verbosity and could be condensed or moved to annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no input parameters and no output schema, the description should provide a complete picture of how to use it. It fails to explain what input is required (e.g., an audio file), how to specify speed, or what the output looks like. The credit information, while relevant, does not fill these gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so the baseline is 4. However, the description does not explain how the tool receives its input (e.g., a file upload in the browser). This omission reduces the semantic value, as an agent might not understand how to invoke the tool without parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the primary action: 'Change audio playback speed without pitch distortion.' The verb is specific, and the resource is clear. However, it is not explicitly distinguished from the sibling tool 'mio_video_speed' which may also affect audio speed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool over alternatives like 'mio_audio_denoise' or 'mio_video_speed'. It does not mention prerequisites, limitations, or typical use cases, leaving the agent without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
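'Change speed without pitch distortion' is time-stretching. For playback, browsers expose this directly; offline re-encoding (presumably what the tool produces) would need a phase-vocoder-style step. A minimal playback sketch, with a placeholder file name:

```typescript
// Minimal sketch: pitch-preserving speed change for playback.
// "speech.mp3" is a placeholder; offline export would require more work.
const player = new Audio("speech.mp3");
player.preservesPitch = true; // time-stretch instead of shifting pitch
player.playbackRate = 1.5;    // play at 1.5x speed
void player.play();
```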
mioffice_list_tools (Grade: A)
List all 125+ MiOffice applications with pricing tier per tool. Optionally filter by category.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Category filter — use "all" for the complete catalog | all |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While it states this is a listing/filtering operation (implied read-only), it doesn't disclose important behavioral aspects like pagination, rate limits, authentication requirements, error conditions, or what the output format looks like. For a tool that presumably returns a potentially large list (125+ items), this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two short sentences) with zero wasted words. It's front-loaded with the core purpose ('List all 125+ MiOffice applications') followed by the key usage detail. Every word earns its place in this efficient description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a listing tool with no annotations and no output schema, the description is insufficiently complete. It doesn't explain what information is returned about each application, how results are structured, whether there's pagination for 125+ items, or any error handling. For a discovery tool that should help users understand available options, more context about the return format would be valuable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single parameter with its enum values and default. The description mentions filtering by category but adds no additional semantic context beyond what's in the schema. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all 125+ MiOffice applications') and resource ('MiOffice applications'), distinguishing it from sibling tools which are all specific conversion/processing tools rather than listing tools. It provides concrete scope information (125+ applications) that isn't obvious from the name alone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about how to scope this tool ('Optionally filter by category'), but doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools. The agent can infer this is for discovery/listing rather than processing, but no explicit exclusion guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
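Since this is one of the few tools with a documented parameter, an invocation sketch is straightforward. A minimal example with the official TypeScript MCP SDK, assuming an already-connected client; the "audio" category value is a guess, since the schema only documents "all".

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Minimal sketch: calling mioffice_list_tools through a connected MCP client.
async function listAudioTools(client: Client) {
  const result = await client.callTool({
    name: "mioffice_list_tools",
    arguments: { category: "audio" }, // omit to get the default, "all"
  });
  console.log(result); // result shape is undocumented by the tool
}
```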
mioffice_open_tool (Grade: C)
Return the URL for a specific MiOffice application by name or search term. Includes pricing info so the user knows the cost before invoking.
| Name | Required | Description | Default |
|---|---|---|---|
| toolName | Yes | Tool name, MCP id (e.g. "mio_face_swap"), or search term (e.g. "voice clone") |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves URLs but does not explain how the search works (e.g., exact match, partial match), error handling, or performance aspects like rate limits. This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, consisting of two clear sentences: 'Return the URL for a specific MiOffice application by name or search term. Includes pricing info so the user knows the cost before invoking.' Every word contributes to understanding the tool's purpose without unnecessary elaboration, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a search tool with no annotations, no output schema, and 100% schema coverage for one parameter, the description is incomplete. It lacks details on search behavior, result format, error cases, and how it differs from siblings like 'mioffice_list_tools.' This makes it inadequate for full agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'toolName' documented as 'Tool name, MCP id (e.g. "mio_face_swap"), or search term (e.g. "voice clone")'. The description adds minimal value by restating that lookup works by name or search term, which aligns with the schema but does not provide additional syntax or format details. Baseline 3 is appropriate as the schema handles most of the parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: return the URL for a specific MiOffice application, located by name or search term. The verb ('Return') and resource ('URL for a MiOffice application') are specific, but the description does not explicitly differentiate this tool from siblings like 'mioffice_list_tools', which lists the catalog rather than resolving a single URL.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions searching by name or tool key but does not specify scenarios, prerequisites, or exclusions, such as when to use 'mioffice_list_tools' for listing tools instead. This lack of context leaves usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
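The same pattern applies to mioffice_open_tool; the sketch below reuses the connected client from the previous example and the "mio_face_swap" id that the parameter's own documentation gives.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Minimal sketch: resolving a tool URL by MCP id. A search term such as
// "voice clone" would work the same way, per the parameter description.
async function openFaceSwap(client: Client) {
  const result = await client.callTool({
    name: "mioffice_open_tool",
    arguments: { toolName: "mio_face_swap" },
  });
  console.log(result); // should carry the tool URL plus pricing info
}
```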
mioffice_pdf_editor (Grade: B)
Open MiOffice PDF Editor — annotate, highlight, fill forms, sign, watermark. Light WASM, free on Day Pass.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It implies opening an editor for interactive use but doesn't specify whether this launches a UI, requires user authentication, has rate limits, or what happens upon invocation (e.g., opens a file or starts a session). For a tool with zero annotation coverage, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: two short sentences that state the tool's purpose with examples and note the cost tier ('Light WASM, free on Day Pass'). Every word earns its place, and there's no wasted text or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (opening an editor likely involves interactive or session-based behavior), lack of annotations, and no output schema, the description is insufficient. It doesn't explain what 'Open' entails (e.g., returns a session ID, launches an application), what happens after invocation, or any error conditions. This leaves significant gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it appropriately avoids discussing parameters. A baseline of 4 is assigned as it correctly handles the no-parameter case without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Open MiOffice PDF Editor — annotate, highlight, fill forms, sign, watermark.' It specifies the verb 'Open' and resource 'MiOffice PDF Editor', with examples of possible actions. However, it doesn't explicitly differentiate from siblings like 'mioffice_open_tool' or 'mioffice_pdf_merge', which slightly reduces clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions general PDF editing capabilities but doesn't specify prerequisites, when-not-to-use scenarios, or compare to siblings like 'mioffice_pdf_compress' or 'mioffice_pdf_split'. This leaves the agent with minimal usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mioffice_pricing_info (Grade: A)
Return MiOffice's current pricing model — welcome credits, Day Pass, and one-time credit packs. LLM should relay this to users before invoking any paid tool.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description correctly identifies a read-only operation returning pricing data. Could mention authentication or free access, but overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, no fluff. First sentence states purpose, second provides usage guidance. Efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No parameters or output schema; description fully covers purpose and usage context. Complete for a simple info-retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema (coverage 100%); description adds value by detailing what the tool returns, compensating for the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns MiOffice's current pricing model, listing specific components (welcome credits, Day Pass, credit packs). Distinguishes from sibling tools which are processing tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs LLM to relay pricing info before invoking any paid tool, providing clear context. Lacks explicit alternatives or when-not-to-use, but sufficient given no competing tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_avif_to_jpg (Grade: C)
AVIF to JPG — Convert AVIF images to JPG format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It states the tool 'Runs in the browser' and offers credit/workspace information, but fails to disclose behavioral traits such as what happens to the source file, any quality settings, or processing limits. The core conversion action is implied but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the conversion action, but the remainder discusses credits and pricing, which is not essential for tool invocation. It is somewhat verbose for a simple conversion tool, though still readable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description should explain how input is supplied and what the output is. It only says 'Convert AVIF images to JPG format.' It omits context like file selection mechanism or output location. For a simple tool, this is a significant gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema coverage is 100%. Baseline for 0 params is 4, but the description does not explain how the agent provides the input file (e.g., 'select an AVIF file from the workspace'). This omission reduces the score to 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts AVIF images to JPG format using 'AVIF to JPG — Convert AVIF images to JPG format.' This is a specific verb+resource pair. However, it does not distinguish itself from sibling conversion tools (e.g., mio_image_heic_to_jpg, mio_image_webp_to_png) beyond the format names, so it loses one point.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Runs in the browser' and pricing details, but no when/when-not criteria or alternative tool recommendations. The agent is left to infer usage from the format name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_compress (Grade: C)
Compress Image — Reduce image file size while maintaining quality. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must disclose behavioral traits. It mentions 'Runs in the browser', which implies client-side processing, but does not describe potential side effects, success criteria, or error handling. Critical behavioral details like supported image formats and output quality settings are omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the purpose but then diverges into detailed billing information that is not essential for tool usage. The pricing sentences are verbose and could be shortened or moved. Overall, it is not optimally concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, output schema, or annotations, the description should provide a complete understanding of how to use the tool. It fails to explain how an image is provided (e.g., file upload, workspace reference), what formats are supported, or what the expected outcome is. The description is incomplete for an agent to use effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so there is nothing to describe. The description does not need to add parameter meaning. With 0 parameters, baseline is 4, and the description adds no confusion.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compresses images to reduce file size while maintaining quality. However, it does not differentiate from sibling 'mio_image_compress_webp', leaving ambiguity about the output format or compression algorithm.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description focuses on billing and credit coverage but provides no guidance on when to use this tool versus alternatives like 'mio_image_compress_webp' or other image tools. There is no mention of prerequisites, limitations, or optimal scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
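'Runs in the browser' for image compression plausibly means the Canvas API; a minimal sketch under that assumption follows. The quality value is arbitrary, and the same toBlob call also covers format conversion (the critique against the sibling mio_image_convert) by changing the MIME type.

```typescript
// Minimal sketch: browser-side image re-encode via Canvas. Illustrative only.
async function compressImage(file: File, quality = 0.8): Promise<Blob> {
  const bitmap = await createImageBitmap(file);
  const canvas = document.createElement("canvas");
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0);

  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("encode failed"))),
      "image/jpeg", // swap to "image/webp" etc. for conversion
      quality       // 0..1, lower = smaller file
    )
  );
}
```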
mio_image_compress_webp (Grade: C)
Compress WebP — Reduce WebP image file size. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states 'Runs in the browser' (odd for MCP) but omits critical behavioral details: Does it replace the original file? Is it idempotent? What are the side effects? Billing info does not substitute for operational transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The first sentence is concise and front-loaded. However, the remaining three sentences focus on billing and pricing, which are secondary for an agent selecting a tool. This extra content reduces conciseness without adding operational value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description should explain how to invoke the tool (e.g., provide a file, use current context). It fails to do so, leaving the agent to guess. Billing details are irrelevant to completeness for tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline is 4. The description does not clarify how input is provided (e.g., via context or file upload), but the absence of parameters means no additional explanation is strictly needed. Billing details are irrelevant but not harmful.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The first sentence clearly states the tool's purpose: 'Compress WebP — Reduce WebP image file size.' This provides a specific verb and resource. However, it does not distinguish this tool from sibling 'mio_image_compress', which may also handle WebP compression.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidelines are provided on when to use this tool versus alternatives like 'mio_image_compress'. The description focuses on billing details (credits, day pass) rather than invocation context or prerequisites. An agent cannot determine when this tool is the appropriate choice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_convert (Grade: C)
Convert Image — Convert images between formats. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must disclose behavioral traits. It only mentions that the tool 'runs in the browser,' but does not describe any operational behavior, such as file size limits, conversion process, or whether it alters the original file. This is insufficient for a zero-parameter tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, but much of the text (about credits and day passes) is verbose and not directly relevant to tool invocation. It could be trimmed to a shorter form without losing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters, no output schema, and no annotations, the description fails to explain how to use it (e.g., how to provide the image for conversion). It addresses only billing context, leaving a significant gap in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no properties, so schema description coverage is trivially 100% and the baseline is 4. However, the description does not explain how the tool receives the image to convert (e.g., via file upload or context), which would add meaningful context. It neither adds nor detracts significantly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Convert images between formats,' indicating the tool's purpose. This distinguishes it from sibling tools that target specific format pairs (e.g., mio_image_avif_to_jpg), but the lack of explicitly stated supported formats slightly reduces clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this general converter over sibling tools that handle specific format conversions. It focuses entirely on billing and credits, with no mention of context or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_crop (Grade: C)
Crop Image — Cut out a portion of your image. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose all behavioral traits. It mentions 'Runs in the browser' (client-side processing) but fails to describe destructive behavior, authentication needs, or output format. Billing info is provided but does not cover tool-specific behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences; the first is concise and front-loaded. However, the subsequent three sentences focus on billing, which is not essential for tool invocation. It could be more concise by moving pricing details elsewhere.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of input schema, output schema, and annotations, the description is incomplete. It does not state input source (e.g., file upload), output format, or any size limitations. The agent lacks essential context to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and the description does not explain how to specify the crop region (e.g., dimensions, coordinates). This omission leaves the agent without critical invocation details. While zero-parameter tools have a baseline of 4, the description fails to compensate for the lack of schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Crop Image — Cut out a portion of your image,' which clearly indicates the action and resource. The action itself separates it from sibling image tools (e.g., resize, rotate), though the description never explicitly contrasts itself with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides billing and execution context ('Runs in the browser') but offers no guidance on when to use cropping versus other image editing tools. It lacks alternative suggestions or usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
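The crop critique above notes that the schema gives no way to specify a region. For contrast, a minimal Canvas sketch of the operation shows the four values (x, y, width, height) such a schema would need; the helper function and its parameters are hypothetical.

```typescript
// Minimal sketch: cropping a region with Canvas. Hypothetical helper.
async function cropImage(file: File, x: number, y: number, w: number, h: number): Promise<Blob> {
  const bitmap = await createImageBitmap(file);
  const canvas = document.createElement("canvas");
  canvas.width = w;
  canvas.height = h;
  // Copy only the selected source region into the output canvas.
  canvas.getContext("2d")!.drawImage(bitmap, x, y, w, h, 0, 0, w, h);

  return new Promise((resolve, reject) =>
    canvas.toBlob((b) => (b ? resolve(b) : reject(new Error("encode failed"))), "image/png")
  );
}
```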
mio_image_heic_to_jpg (Grade: B)
HEIC to JPG — Convert iPhone HEIC photos to JPG format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It only states that the conversion runs in the browser, but fails to mention whether the original file is preserved, if there are any limits, or what happens to metadata. For a conversion tool, more transparency is expected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, starting with a clear statement of purpose. The additional sentences about pricing are somewhat extraneous but do not detract significantly. It is front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters, no output schema, and no annotations, the description covers the basic purpose and pricing context. However, it lacks details such as whether the conversion is reversible, whether the original file is altered, or any size/format constraints. These gaps reduce completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and the description adds no parameter information because none exist. Schema description coverage is 100% (no undocumented parameters). As per guidelines, zero parameters result in a baseline of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts iPhone HEIC photos to JPG format, specifying both source and target. It also notes it runs in the browser. While the name already indicates the conversion, the description adds context that it is for iPhone photos. However, it does not differentiate from sibling tools like mio_image_heic_to_png, relying on the name for distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It only mentions pricing and that it runs in the browser, but does not specify conditions like file size limits or preferred scenarios. An agent would have no basis to choose this over similar conversion tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_heic_to_png (Grade: B)
HEIC to PNG — Convert iPhone HEIC photos to PNG format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden of behavioral disclosure. It only states 'Runs in the browser' and mentions credit coverage. It does not disclose whether the conversion is destructive, what authentication is needed, what rate limits apply, or what happens to inputs and outputs. This is minimal transparency for a conversion tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The core purpose is front-loaded, but the description includes verbose pricing information that could be in a separate annotation or tool. While not excessively long, the pricing details are not essential for tool invocation, making it less concise than ideal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no parameters and no output schema. The description does not explain how the user provides input (e.g., file upload) or what the output is. This leaves a significant gap for an agent to know how to invoke the tool correctly. Essential context missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
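To make the invocation gap concrete, a schema with even one documented parameter would tell the agent how to supply the image. The shape below is a hedged sketch; the property name and its description are invented for illustration and do not reflect the server's actual contract.

```typescript
// Hypothetical input schema for mio_image_heic_to_png.
// The fileUrl property is an assumed example, not the real interface.
const heicToPngInputSchema = {
  type: "object",
  properties: {
    fileUrl: {
      type: "string",
      description: "URL or data URI of the source HEIC image",
    },
  },
  required: ["fileUrl"],
} as const;
```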
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is trivially 100%. The description adds no parameter details because none exist. Baseline for zero parameters is 4, and since there is no need for compensation, score remains 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts iPhone HEIC photos to PNG format, with a specific verb and resource. The name also includes 'heic_to_png', and the description explicitly mentions 'HEIC to PNG — Convert iPhone HEIC photos to PNG format.' This distinguishes it from siblings like mio_image_heic_to_jpg.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on pricing and workspace usage, but does not explicitly state when to use this tool over alternatives (e.g., when to choose PNG vs JPG). It implies usage for HEIC to PNG conversion but lacks direct guidance on selection among siblings or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
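One way to close this gap is to fold selection guidance directly into the description. The wording below is an invented example of the "use X instead of Y when Z" pattern, not the server's actual text.

```typescript
// Hypothetical description embedding explicit selection guidance.
const heicToPngDescription = [
  "Convert iPhone HEIC photos to PNG format. Runs in the browser.",
  "Use this tool when transparency or lossless output is required;",
  "prefer mio_image_heic_to_jpg when a smaller file size matters more",
  "than lossless quality.",
].join(" ");
```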
mio_image_jpeg_to_jpg (A)
JPEG to JPG — Convert JPEG images to JPG format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description only mentions 'Runs in the browser' but does not disclose side effects, authentication needs, rate limits, or whether original image is altered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is relatively concise but includes billing details that, while useful, could be trimmed for brevity. Overall structure is clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose and billing but lacks detail on input mechanism, output format, and any constraints (e.g., file size limits). Adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema. Description does not explain how the input image is provided (e.g., file selection, URL), which is critical for a tool with no params.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states converting JPEG images to JPG format. Specific verb and resource, easily distinguished from siblings like mio_image_heic_to_jpg.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context about billing (credits, day pass) and that it runs in browser, but does not explicitly state when to use versus alternatives or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_jpg_to_webp (A)
JPG to WebP — Convert JPG images to WebP for smaller file sizes. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It mentions 'Runs in the browser' and credit coverage, but fails to explain important behaviors like input mechanism (how to provide the JPG), output format, file size reduction specifics, or error handling. This is insufficient for a conversion tool with zero annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is five sentences, front-loaded with the action. The pricing and credit details are relevant but arguably extra for a tool description. It is efficient without being overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description should cover input mechanism and output type. It explains purpose and credit usage but omits how the agent provides the JPG image (file, URL?) and what the output is (download, URL?). This is a gap for a conversion tool, making it minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema coverage is 100% (empty). The description does not add parameter info because there are none. However, it implies an input image is needed, but does not explain how it is provided. With no parameters, a baseline score of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Convert JPG images to WebP for smaller file sizes', which specifies the verb (Convert) and resource (JPG images to WebP). It distinguishes from sibling tools like mio_image_png_to_webp (different input) and mio_image_compress_webp (compression, not conversion).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on where the tool runs (browser) and credit usage, but does not explicitly state when to use this tool versus alternatives like mio_image_compress_webp or other conversion tools. The purpose is clear enough for the agent to select it for JPG-to-WebP conversion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_png_to_svg (B)
PNG to SVG — Convert raster images to scalable vector graphics. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions 'Runs in the browser' as a behavioral trait, but lacks other important details such as whether the tool preserves transparency, handles large files, or provides quality options. Without annotations, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality, followed by pricing details. While the pricing section adds length, it is still concise for the information conveyed. Could be slightly tighter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple, but the description omits expected output format, download/display behavior, file size limits, or any step-by-step instructions. With no output schema, more operational context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so schema coverage is 100%. Per the rubric, zero parameters gives a baseline of 4. The description does not add parameter-specific details beyond the tool's purpose, which is acceptable but not enhancing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'PNG to SVG' and 'Convert raster images to scalable vector graphics', which is a specific verb+resource pair. It distinguishes from sibling tools that handle other formats or operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for PNG to SVG conversion, but provides no explicit guidance on when to use this tool versus alternatives (e.g., other image conversion tools). No when-not-to-use or alternative suggestions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_png_to_webp (A)
PNG to WebP — Convert PNG images to WebP for smaller file sizes. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears the full burden of behavioral disclosure. It mentions the tool runs in the browser and covers credit/pricing details, but does not disclose important behaviors such as whether it preserves transparency, handles large files, or any rate limits. This is insufficient for a tool with zero annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise and front-loaded with the core conversion purpose. However, it includes unnecessary pricing and credit details that could be presented elsewhere, adding slight verbosity for a simple conversion tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema and no parameters, but the description fails to explain how the input image is provided or how the output WebP is returned. Important context about the conversion workflow (e.g., file upload, download) is missing, making it incomplete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is 100%. The description adds no parameter-specific information, but with no parameters to document, a baseline of 4 is appropriate as the description does not need to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the purpose: converting PNG images to WebP format for smaller file sizes. It includes a concise verb-resource pair ('Convert PNG images to WebP') and distinguishes itself from sibling tools like general converters or other format-specific tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool versus alternatives like 'mio_image_convert' or other PNG-to-* tools. It implies usage for compression but lacks comparative context or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_resize (C)
Resize Image — Change image dimensions to any size. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description notes that it runs in the browser, which is a useful behavioral detail. However, it does not disclose whether resizing preserves aspect ratio, supported formats, or if it is destructive. With no annotations, more disclosure would be expected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description starts with a clear statement but then devotes most of its length to credit and pricing details. It could be more concise and front-load functional information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and many sibling tools, the description does not adequately explain output behavior or input requirements. It omits critical details like aspect ratio handling and supported formats.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which is unusual for a resize tool. The description does not clarify how to provide input dimensions or the image itself. It contradicts the expectation set by the tool name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
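A resize tool would normally surface at least the target dimensions. The interface below sketches the parameters an agent would expect here; the names, optionality, and the aspect-ratio default are all assumptions for illustration.

```typescript
// Hypothetical inputs for mio_image_resize (assumed, not actual).
interface ResizeImageInput {
  width: number;             // target width in pixels
  height?: number;           // optional; omitting it could preserve aspect ratio
  keepAspectRatio?: boolean; // assumed to default to true when height is omitted
}
```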
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool resizes images to any size, which is clear. However, it does not differentiate from sibling tools like crop or compress that might also modify dimensions. The pricing information dilutes the core purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs other image tools or how to specify dimensions. The description focuses on credits and pricing rather than usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_rotate (B)
Rotate Image — Rotate and flip images. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond 'runs in the browser', no behavioral traits are disclosed. With no annotations, the description should cover aspects like synchronicity, side effects, or input requirements. The credit/pricing info is about access, not operation behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description includes several sentences about pricing and credits that are irrelevant to the tool's function. This distracts from the core purpose and makes it unnecessarily verbose for a simple rotation tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description is incomplete. It does not explain how the image is provided (e.g., from workspace context) or what the result looks like. The pricing info adds no operational completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so the baseline is 4. The description adds no parameter info, but none is needed since schema coverage is 100% and no parameters exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Rotate Image — Rotate and flip images', which is a specific verb and resource. It immediately conveys the tool's purpose without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like mio_video_rotate or other image editing tools. The description only implies it's for images (due to 'image' in name) but does not explicitly state usage context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_upscale (B)
Upscale Image — Enlarge images using AI while preserving quality. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the AI Studio run, Modal workers, credit variability, exclusion from the Day Pass, 24-hour file auto-deletion, retention auditability, and the credit unlock model. This is thorough for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description contains several useful details but is somewhat verbose with policy information. It could be more concise while retaining key points. It is adequately structured with a main action followed by context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (credit system, AI worker dispatch) and no output schema, the description is incomplete. It fails to specify how to invoke the tool (missing input parameters). The agent cannot determine what to pass. The description covers behavioral aspects but lacks essential invocation context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, but the description mentions 'model and file size' which implies parameters are needed. The description does not explain how to provide the image or select a model, leaving the agent without necessary input semantics. Schema coverage is 100% only because the schema is empty, but the description compensates poorly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
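Since the description references 'model and file size' without exposing either, a sketch of the inputs an upscaler might plausibly take shows what is missing; the model identifiers and scale factors are invented examples.

```typescript
// Hypothetical inputs for mio_image_upscale (assumed, not actual).
interface UpscaleImageInput {
  fileUrl: string;            // location of the source image
  model?: "fast" | "quality"; // assumed model choice affecting credits per run
  scale?: 2 | 4;              // assumed enlargement factor
}
```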
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Upscale Image — Enlarge images using AI while preserving quality.' The verb 'Upscale' and resource 'Image' are specific. However, it does not differentiate from the sibling 'mio_ai_upscale_pro', which likely does similar upscaling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions credit-based usage, file auto-deletion, and pricing, but it does not specify when to use this tool vs alternatives like 'mio_ai_upscale_pro' or other image processing tools. Usage context is implied but not explicitly guided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_webp_to_jpg (A)
WebP to JPG — Convert WebP images to JPG format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds that it 'runs in the browser' and mentions the credit model, which provides some behavioral context beyond the tool name. However, it lacks details like file size limits, synchronous behavior, or return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise and front-loaded with the purpose. The credit information adds some length but is relevant for usage context. Could be slightly tighter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (no parameters, no output schema), the description covers the basic function and execution context. However, it omits details about the output format or any limitations, leaving some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline is 4. The description does not provide parameter semantics, but none are needed. It adds no extra parameter info, but this is acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Convert WebP images to JPG format', matching the tool's name and clearly distinguishing it from sibling format converters like jpg_to_webp or webp_to_png.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, or any prerequisites. The credit information implies cost considerations but does not help an agent decide which conversion tool to invoke.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_image_webp_to_png (A)
WebP to PNG — Convert WebP images to PNG format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions 'Runs in the browser,' indicating a client-side operation, which adds some behavioral context. However, with no annotations provided, the description bears full burden; it does not disclose details like file size limits, processing time, or output format specifics (e.g., whether the PNG is downloaded or returned).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose. It runs five sentences: the first two explain the function and environment, the last three cover billing and pricing. While billing context may be useful, it slightly dilutes the functional core; overall it remains concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple conversion tool with no parameters and no output schema, the description covers the purpose, runtime environment, and billing. It omits details about output handling (e.g., download link), but the tool is straightforward and the description is largely complete given the simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema description coverage is 100% (trivially). Per guidelines, baseline for 0 parameters is 4. The description does not need to add parameter information, and it appropriately focuses on the operation and billing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts WebP images to PNG format ('Convert WebP images to PNG format'), which is a specific verb+resource. However, it does not differentiate from sibling tools like mio_image_webp_to_jpg, as it only mentions the format without contrasting with alternative output formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for converting WebP to PNG but provides no guidance on when to use this tool versus alternatives (e.g., when to choose PNG over JPG). There are no exclusions or context for selection among the many conversion tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_jpg_to_pdf (A)
JPG to PDF — Convert JPG images into a PDF document. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It states 'Runs in the browser' but does not disclose other important behaviors, such as whether the conversion is entirely client-side, whether any data is uploaded, what authentication is required, or what rate limits apply. The pricing information is tangential to the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of 5 sentences, but only the first sentence describes the tool's core functionality. The remaining sentences focus on pricing and credits, which are less immediately relevant for tool invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description provides the essential purpose and a note about browser execution. However, it lacks details on whether multiple JPGs are accepted, output naming conventions, or process limitations. The inclusion of pricing adds context but not operational completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema description coverage is 100%. According to the rubric, baseline 4 applies. The description adds no parameter meaning beyond the empty schema, but that is acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'JPG to PDF — Convert JPG images into a PDF document', which clearly identifies the verb (convert) and resource (JPG to PDF). It distinguishes from sibling tools like mio_png_to_pdf and mio_pdf_image_to_pdf by specifying the input format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool versus alternatives, such as other image-to-PDF converters. Usage is implied by the tool name, but no exclusions or comparisons are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_notes_notes (C)
Notes — Private notes with real-time collaboration. No account, no cloud. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must fully disclose behaviors. It mentions that the tool runs in the browser with no account or cloud storage, but does not describe what happens when it is invoked (does it open a note editor?), whether it is read-only or mutable, or any side effects. This is insufficient for behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is overly verbose about pricing and credit details that are not directly relevant to tool usage. It front-loads the tool name but then provides unnecessary information. Every sentence should add value for tool selection and invocation, yet much of this one is about credits and subscriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description is minimally complete. It covers pricing but lacks operational context (e.g., what note is created, or how collaboration works). The agent lacks the information to reliably invoke the tool beyond knowing it exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema description coverage is 100%. The description adds no parameter information, which is acceptable because there are none. The baseline for zero parameters is 4, and the description does not detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Notes — Private notes with real-time collaboration', indicating the tool is for note-taking with collaboration. However, it doesn't specify the exact action (create, edit, view) or the resource type, leaving some ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The sibling tools are mostly AI and media utilities, so the context suggests it's separate, but no explicit when-to-use or when-not-to-use instructions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_p2p_file_transfer (B)
Transfer Files — Transfer files directly between devices. No cloud, no upload, fully encrypted. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It mentions the transfer is direct, no cloud, encrypted, and runs in browser, but omits critical details like whether both devices must be online, file size limits, or how the transfer is initiated. The pricing info is access-related, not behavioral.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but includes extraneous pricing and access details that could be moved to a separate tool or documentation. The core purpose is front-loaded, but the extra sentences dilute conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool is for file transfer (moderately complex) with no parameters and no output schema, the description fails to explain the workflow, such as how to select files or recipients, or how the transfer is managed. This is incomplete for an AI agent to use effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
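A description that narrates the workflow would resolve most of this. The rewrite below is purely illustrative; every step it names is an assumption about how a browser P2P transfer could work, not documented server behavior.

```typescript
// Hypothetical description spelling out the transfer workflow.
const fileTransferDescription =
  "Transfer files directly between devices over an encrypted " +
  "peer-to-peer connection. Both devices must keep the browser tab open; " +
  "the sender picks files locally and shares a pairing code or QR code " +
  "that the receiver uses to start the transfer.";
```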
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline is 4. The description does not explain why there are no parameters or how the file and recipient are specified, but since the schema itself is empty, the description does not need to add parameter meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Transfer Files — Transfer files directly between devices. No cloud, no upload, fully encrypted.' This is a specific verb ('Transfer') and resource ('files'), clearly distinguishing it from sibling P2P tools like screen sharing or session handoff.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for direct file transfers but does not explicitly state when to use it vs alternatives or when not to use it. No mention of prerequisites or when this tool is preferable over other P2P tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_p2p_session_handoff (A)
Device Handoff — Continue your session on another device. Scan QR code to transfer your current page. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It states 'Runs in the browser' and discusses pricing, but does not describe side effects, whether the original session ends, idempotency, or prerequisites. The agent lacks crucial behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is about five sentences, including extraneous pricing information ('covered by signup welcome credits...') that does not aid tool selection. It could be more concise by removing billing details, which are better suited for a pricing tool or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no input/output schema and no annotations, the description lacks completeness. It does not explain the outcome of calling the tool (e.g., does it generate a QR code? transfer the session automatically?), nor does it provide a step-by-step process. The agent cannot fully predict the tool's behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters in the input schema (100% schema coverage). Per guidelines, a baseline of 4 is appropriate since the description does not need to add parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Device Handoff — Continue your session on another device. Scan QR code to transfer your current page.' It specifies the verb 'transfer' and the resource 'session/page', and it distinguishes itself from sibling tools like file transfer and screen share.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: 'Continue your session on another device' and 'Scan QR code'. It implies when to use this tool but does not explicitly mention when not to use it or compare with alternatives like mio_p2p_file_transfer. It is clear but lacks exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_compress (A)
Compress PDF — Reduce PDF file size by compressing images. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds that it 'Runs in the browser' and mentions credit coverage, providing behavioral context beyond the empty schema. However, it does not disclose whether the tool is destructive, creates a new file, or has limits (e.g., file size). With no annotations, more behavioral detail would be beneficial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the purpose effectively. However, it includes several sentences about credit and pricing details that, while informative for billing context, are not strictly necessary for core functionality. Still, it remains relatively concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema, no annotations), the description is reasonably complete: it explains the action, method, runtime location, and credit coverage. It lacks details about output behavior or file size limits, but overall it covers essential aspects for a straightforward tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema description coverage is 100%. Per guidelines, baseline score is 4. The description adds no parameter-specific information, but since there are no parameters, no additional meaning is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Compress PDF — Reduce PDF file size by compressing images,' clearly stating the verb (compress), resource (PDF), and method (image compression). This sufficiently distinguishes it from sibling tools like merge, split, etc., by specifying the action and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives. It implies typical compression use cases but lacks guidance on exclusions or comparisons to other compression-related tools (e.g., mio_image_compress). No 'when not to use' or alternative suggestions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_editor (C)
PDF Editor — Edit PDFs: add text, mask content, annotate, OCR. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions it runs in the browser but omits behavioral traits like side effects, file handling, or processing limits. No annotations present to compensate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description includes excessive pricing information that is not core to tool usage. The core functionality is front-loaded but the additional details reduce conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema defined; the description does not indicate what the tool returns. Pricing details are irrelevant to usage. The sibling mioffice_pdf_editor exists but is not differentiated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Though there are no parameters, the description should explain how the tool is invoked or what input is needed. It does not clarify that no input is required in the API sense.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
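For a zero-parameter tool, one sentence in the description can still state the invocation mechanism. The sentence added below is a hedged example; the file-picker behavior is assumed, not confirmed.

```typescript
// Hypothetical description closing the invocation gap for a
// zero-parameter tool; the file-picker detail is an assumption.
const pdfEditorDescription =
  "Edit PDFs: add text, mask content, annotate, OCR. " +
  "Opens a browser workspace where the PDF is chosen via a file picker; " +
  "no parameters are passed through the tool call itself.";
```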
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it edits PDFs with specific actions (add text, mask content, annotate, OCR), distinguishing it from sibling PDF tools like merge or compress.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs. other PDF tools or the sibling mioffice_pdf_editor. The description lacks selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_epub_to_pdf (A)
EPUB to PDF — Convert EPUB ebooks to PDF format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description must cover behavior. It mentions 'Runs in the browser' and credit usage, but does not disclose how input is provided (no parameters), file size limits, or output handling. The browser execution hint is useful but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at ~60 words with clear front-loading of the core action. Each sentence adds value: conversion type, browser execution, credit coverage, and pricing reference. Minor redundancy in credit details could be trimmed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with no output schema, the description explains the conversion action and credit model but lacks clarity on how the user provides the input EPUB file. This omission reduces completeness for an agent needing to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema is empty (0 parameters), so no parameter details are needed. Baseline is 4 per rubric. The description adds no parameter-specific information, which is acceptable given the absence of parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'EPUB to PDF' and 'Convert EPUB ebooks to PDF format', providing a specific verb and resource. It distinguishes from sibling tools like mio_jpg_to_pdf and mio_pdf_from_word by clearly indicating the input format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for EPUB to PDF conversion but does not explicitly state when to use or avoid this tool relative to siblings. No alternative tools or exclusions are mentioned, relying on the tool name to differentiate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_from_word (B)
Office to PDF — Convert Word, PowerPoint, and OpenOffice documents to PDF. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden of behavioral disclosure. It mentions 'runs in the browser' and billing, but does not explain how to provide input, what the output is, or any side effects. Moderate transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph, front-loaded with the core purpose. However, it includes unnecessary billing details (e.g., 'Covered by signup welcome credits...') that could be condensed. Still fairly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no input schema parameters and no output schema, the description should explain the input mechanism and return value. It omits how to provide the source document, leaving the invocation context unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, but the description does not clarify how the document is supplied (e.g., via file upload or other context). Schema coverage is trivially 100%, yet the description fails to compensate for the missing parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts Word, PowerPoint, and OpenOffice documents to PDF, using specific verbs and resource names, effectively distinguishing it from sibling PDF conversion tools like mio_jpg_to_pdf or mio_pdf_to_doc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks guidance on when to use this tool versus alternatives among the many PDF-related siblings. It only mentions billing, not contextual scenarios or prerequisites for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_image_to_pdf (A)
Image to PDF — Convert any image to PDF document. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided; the description focuses on credit/pricing info but does not disclose behavioral traits such as whether the conversion is lossless, if input files are retained, or authentication requirements. The credit mention is useful but insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a clear first sentence. The additional credit/pricing details, while informative, are not essential for tool usage and add some fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool, the description covers purpose, runtime environment, and billing. However, it lacks accepted image formats and size limits, which would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline is 4. The description adds no parameter info, which is acceptable since no parameters exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Convert any image to PDF document' with a specific verb and resource. It distinguishes itself from sibling tools like mio_pdf_png_to_pdf and mio_jpg_to_pdf by covering all image types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus the format-specific converters. The context of 'any image' implies broader use, but alternatives are not mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_merge (B)
Merge PDF — Combine multiple PDF files into one document. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Key behavioral traits are missing: no mention of file size limits, number of files, data handling (e.g., upload vs local processing), or the output format. The description states it 'runs in the browser' and provides billing details, but these do not sufficiently inform the agent of operational constraints or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence. The following sentences about billing and credit coverage, while not essential for an agent, are not overly verbose. Could be more concise if billing info were moved to a separate annotation, but still reasonably structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description covers basic purpose and non-functional context (billing, browser execution). However, it omits important operational details like file format constraints, processing limits, or output characteristics, leaving completeness gaps for a production agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema description coverage is trivially 100%. The description adds value by stating that the tool merges PDFs, but it does not clarify how files are specified (e.g., via workspace selection). A score of 3 is appropriate: the zero-parameter baseline of 4 is reduced because the description adds only marginal context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'Merge PDF — Combine multiple PDF files into one document,' which is a clear, specific verb+resource statement. The title and summary directly indicate the tool's function, and it is easily distinguishable from sibling tools like split, compress, or rotate.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool over alternatives or when not to use it. It lacks comparisons to other PDF tools or edge cases. The billing info is tangential and does not help an agent decide the correct context for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
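The invocation gap flagged above is easiest to see on the wire. An MCP tools/call request for a zero-parameter tool can only carry an empty arguments object, so an agent has no way to name the PDFs to merge or their order. A minimal sketch of the request follows; the shape is the MCP JSON-RPC spec, while the files parameter suggested in the comment is hypothetical.

```typescript
// What an agent can actually send today: the MCP tools/call request for a
// zero-parameter tool has nowhere to name the files to merge or their order.
const mergeCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "mio_pdf_merge",
    arguments: {}, // empty by necessity: the schema declares no parameters
  },
};
// A schema that declared inputs could instead accept something like
//   arguments: { files: ["a.pdf", "b.pdf"] }
// but `files` here is hypothetical, not part of the real tool.
```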
mio_pdf_png_to_pdf (A)
PNG to PDF — Convert PNG images to PDF documents. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description lacks behavioral details beyond stating 'Runs in the browser' and credit usage. No annotations are present, so the description carries full burden. It does not disclose how input is provided (e.g., file upload, URL), whether multiple PNGs are supported, or what the output format is (e.g., downloadable file, URL).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise. The first sentence immediately states the purpose (PNG to PDF conversion), and the subsequent sentences add relevant but separate context (credit usage). No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description explains purpose and credit model, it is incomplete regarding the actual use of the tool (how to provide input, how output is returned). There is no output schema, so the description should clarify the result format, but it does not.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters in the input schema, and schema description coverage is 100% (trivially). According to the rubric, zero parameters result in a baseline of 4. The description adds no additional parameter information, which is acceptable since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts PNG images to PDF documents, starting with 'PNG to PDF' and elaborating 'Convert PNG images to PDF documents.' This is specific to PNG input, distinguishing it from sibling tools like mio_jpg_to_pdf (JPG to PDF) or mio_image_png_to_svg.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description focuses on credit and pricing details but does not mention sibling tools or scenarios where other converters (e.g., mio_jpg_to_pdf) might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_remove_pages (C)
Remove PDF Pages — Remove specific pages from a PDF document. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It notes the tool 'runs in the browser' but fails to disclose whether it modifies the original file, returns a new PDF, or any success/failure behaviors. The credit and pricing info is irrelevant to behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is fairly concise. The first sentence effectively states the purpose, but the sentences on credits are tangential and slightly reduce focus.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description is incomplete. It does not explain what happens after page removal (e.g., download link, replacement), nor how the user selects pages. The pricing info does not fill these gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has zero parameters, so the description should clarify how the inputs (e.g., which PDF and which pages) are provided implicitly via context. It does not, focusing instead on pricing. The baseline of 4 is not met because the description adds no semantics for the tool's operation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Remove') and resource ('PDF Pages') and specifies the action of removing specific pages from a PDF document. However, it does not explicitly differentiate from sibling tools like mio_pdf_editor, which may also allow page removal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like mio_pdf_split or mio_pdf_editor. It mentions credit coverage but lacks context on prerequisites or workflow (e.g., the PDF must be previously uploaded).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
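Several assessments in this section note that annotations are absent, leaving the description to carry the full disclosure burden. The MCP specification defines hints for exactly this purpose. A sketch of how a page-removal tool could declare them follows; the chosen values are assumptions, since the description never says whether the original file survives.

```typescript
// MCP tool annotations as defined by the spec (title, readOnlyHint,
// destructiveHint, idempotentHint, openWorldHint). The values below are
// assumptions about mio_pdf_remove_pages, which the description never confirms.
const removePagesAnnotations = {
  title: "Remove PDF Pages",
  readOnlyHint: false,   // the tool changes a document
  destructiveHint: true, // assumed: removed pages are not recoverable
  idempotentHint: true,  // assumed: removing the same pages twice is a no-op
  openWorldHint: false,  // assumed: operates on the user's file, not external systems
};
```

The spec treats these hints as advisory rather than guaranteed, so the prose description still has to state the behavior; the hints simply give agents a machine-readable first pass.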
mio_pdf_rotate (B)
Rotate PDF — Rotate PDF pages to any angle. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The only behavioral information is 'Runs in the browser,' but it fails to explain critical aspects like how to provide the PDF file or what angles are supported. With no annotations, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The purpose line is concise, but the extensive pricing details are not directly relevant to tool invocation and could be trimmed to improve focus.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description omits essential context such as how the user provides a PDF, supported rotation angles, and output behavior. This is a significant gap for a tool with no parameters and no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no properties, and the description does not add any parameter-related meaning. Schema coverage is trivially 100%, but because the description contributes nothing about how the PDF or the angle is supplied, the zero-parameter baseline of 4 drops to 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'rotate' and the resource 'PDF pages' at the beginning, distinguishing it from similar sibling tools like 'mio_image_rotate' and 'mio_video_rotate'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives, nor does it mention prerequisites or expected input context. It focuses heavily on pricing instead of usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_split (B)
Split PDF — Extract specific pages from a PDF into a new file. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided. The description lacks disclosures about whether the original file is preserved, file size limits, privacy, or processing behavior. It states only that the tool 'runs in the browser', with no user-facing behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat lengthy, with pricing details that are not essential for tool selection. It could be more concise while retaining the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Critical context is missing: there is no mention of the required input (e.g., a PDF file), how to specify the page range, the output format, or success criteria. For a tool with no parameters, the description should clarify the interaction model.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is trivially 100%. The description adds no parameter info because none exist, but it fails to explain how pages are specified (e.g., via the UI), so the zero-parameter baseline of 4 drops to 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (split/extract) and the resource (specific pages from a PDF into a new file), distinguishing it from siblings like merge and remove pages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions credit and pricing context but does not specify when to use this tool versus alternatives, nor prerequisites such as having a PDF file or how to specify pages.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_tiff_to_pdf (A)
TIFF to PDF — Convert TIFF images to PDF documents. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses browser execution and credit usage, which are important behavioral traits. Without annotations, it provides moderate transparency but omits details like file size limits or privacy implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose, followed by relevant execution and pricing details. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description adequately covers purpose, execution context, and pricing. It lacks details on input format variants or output properties but is sufficient for a simple conversion tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so no additional explanation is needed. The description does not mislead, and schema coverage is 100%.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Convert TIFF images to PDF documents' uses a specific verb and resource, clearly stating the tool's function. It is distinguished from sibling conversion tools like mio_jpg_to_pdf by explicitly mentioning TIFF.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context on execution (browser) and pricing model (credits, Day Pass), which helps in deciding when to use. However, it does not explicitly contrast with alternative tools or specify conditions to avoid.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_to_doc (B)
PDF to DOC — Convert PDF files to editable Word documents. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits; it only mentions 'Runs in the browser' and pricing, but omits how input is provided, output format details, file size limits, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short and front-loaded, though the pricing details are extraneous to the core functionality. Still, it remains reasonably efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description fails to explain how the PDF is provided or what the output looks like, leaving an agent uncertain about invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description has no need to explain them. Baseline 4 is appropriate as there is no missing parameter info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Convert PDF files to editable Word documents', clearly identifying the function and distinguishing it from other PDF-to-X tools among siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus alternatives like mio_pdf_to_text or mio_pdf_to_xlsx. It only mentions credit coverage, not usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_to_jpg (A)
PDF to JPG — Convert PDF pages to high-quality JPG images. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It mentions running in browser and billing, but does not disclose whether the conversion is destructive, file handling details, or output specifics beyond 'JPG images'. This is adequate for a simple conversion but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief, with each sentence serving a purpose: the first defines the tool and the rest provide billing context. No redundancy or unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description covers the essential aspects: purpose, environment, and pricing. It could be more complete about potential limitations (max pages, file size), but current content is sufficient for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is 100%. The description adds value by explaining the conversion context, but no parameter-specific info is needed. Baseline score of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Convert PDF pages to high-quality JPG images', specifying the source and result. It distinguishes from sibling tools like mio_jpg_to_pdf which does the opposite.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides billing and runtime context (browser, credits, day pass) which helps agents decide on invocation. However, it does not explicitly state when to use vs alternatives or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_to_text (A)
PDF to Text — Extract text content from PDF files. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It mentions the tool 'runs in the browser' and billing, but fails to disclose critical behavioral traits like OCR support for scanned PDFs, file size limits, or output format. This is insufficient for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description starts with a clear purpose statement. The billing information, while relevant, is somewhat verbose and could be condensed. Overall, it is efficient but not extremely tight.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters or output schema, the description adequately conveys purpose and billing context. However, it omits practical details such as output format, handling of encrypted PDFs, or limitations like page count, leaving gaps for a complete understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema description coverage is trivially 100%. Per guidelines, 0 parameters yield a baseline of 4. The description does not add parameter information because none exist, and it is not required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Extract text content from PDF files,' using a specific verb and resource. This distinguishes it from sibling tools like mio_pdf_to_doc or mio_pdf_to_jpg, which convert to different formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool is for text extraction but provides no explicit guidance on when to use it versus alternatives. It focuses on billing details rather than usage context, leaving the agent to infer applicability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
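The when-to-use gap is fixable in prose alone. Below is a hedged example of a description with explicit routing guidance for this tool; the claims about sibling behavior are inferred from the tool names alone and would need verification against the real tools.

```typescript
// Illustrative rewrite of the description with explicit routing guidance.
// The statements about sibling tools are inferred from their names alone.
const pdfToTextDescription =
  "Extract plain text from a PDF. Use this tool when you need the text " +
  "content itself; use mio_pdf_to_doc for an editable Word document and " +
  "mio_pdf_to_jpg for page images.";
// A real rewrite should also state whether scanned PDFs are OCRed, since the
// review flags that omission and the answer is unknown from the description.
```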
mio_pdf_to_xlsx (B)
PDF to Excel — Convert PDF tables to editable Excel spreadsheets. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full responsibility for behavioral disclosure. It adds that the tool runs in the browser and discusses billing, but does not mention key traits like input method, output format, limitations (e.g., table detection accuracy, page limits), or error handling. The billing context is helpful but insufficient for behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the purpose with 'PDF to Excel — Convert PDF tables to editable Excel spreadsheets.' However, it then spends considerable text on credit and pricing details, which could be more succinct. While not overly long, the billing information could be separated or summarized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the empty input schema and no output schema, the description must provide sufficient context for tool invocation. It covers the conversion purpose and billing, but omits critical details such as how to provide the PDF file (likely via parameter, but schema shows none), expected output format, and any constraints. Thus, it leaves significant gaps for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema is empty (0 parameters), so schema description coverage is 100%. The description does not add parameter semantics because there are none to add. Following the guidelines, baseline 4 is appropriate as the description is not needed to compensate for missing parameter info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts PDF tables to editable Excel spreadsheets, using a specific verb ('Convert') and resource ('PDF tables to Excel'). It distinguishes itself from other PDF conversion tools like mio_pdf_to_doc or mio_pdf_to_text by focusing on table extraction, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions it runs in the browser and discusses credit coverage, which implies online usage. However, it does not explicitly state when to use this tool vs alternatives such as mio_xlsx_to_pdf (reverse operation) or when not to use it. No comparative guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_txt_to_pdf (A)
TXT to PDF — Convert text files to PDF format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It only mentions 'Runs in the browser', but fails to disclose how input is provided (e.g., file upload), whether there are size limits, or any synchronous behavior. The 0-parameter schema suggests input must be provided via some external mechanism, which is not explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The first sentence states the purpose, but the remaining sentences contain billing information that would sit better in separate pricing documentation, reducing focus on the tool's core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0 parameters and no output schema, the description is expected to explain input mechanism and output format. It only mentions conversion and browser execution, leaving out how the agent supplies the text file, which is critical for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so schema coverage is 100%. The description adds no parameter info, but the baseline for zero parameters is 4, so no compensation is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Convert text files to PDF format', which is a specific verb+resource combination. It distinguishes from sibling PDF conversion tools like mio_pdf_image_to_pdf or mio_pdf_from_word.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly states the use case (convert text files to PDF). While it does not explicitly exclude other formats or mention alternatives, the name and purpose are sufficiently unambiguous for an agent to infer when to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_unlock (C)
Unlock PDF — Remove password protection from PDF files. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must fully disclose behavioral traits. It states the tool removes password protection but does not indicate whether it modifies the original file, creates a new file, or has any side effects. The pricing information is not behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but includes unnecessary pricing details that could be omitted or moved. It could be more focused on the tool's core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is incomplete for a zero-parameter tool. It does not explain the output or return value, nor does it differentiate from sibling tools beyond the basic purpose. An agent would lack information on how to integrate this tool into a workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description should explain how the tool receives input (e.g., via file picker or session). It fails to do so, leaving the agent without guidance on how to invoke the tool correctly. The description adds no value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Unlock PDF — Remove password protection from PDF files,' providing a specific verb and resource. This distinguishes it from sibling tools like mio_pdf_compress or mio_pdf_editor, which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions it runs in the browser but does not explain how to invoke the tool, such as how to provide the PDF file. It lacks guidance on when to use it versus other PDF tools and does not mention any prerequisites or required user actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_unzip (C)
Extract Archive — Extract files from ZIP, RAR, 7z, TAR, GZ archives. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must disclose behaviors. It mentions running in browser and credit usage, but lacks details on input expectations, output format, size limits, or whether files are stored. The actual extraction behavior is inferred, not explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The core function is in the first sentence, but the rest consists of pricing information that could be condensed. While not excessively long, it includes extraneous details for tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite 0 parameters and no output schema, the description fails to cover basic context: how the tool receives input, whether it returns extracted files or a download link, or typical use cases. The agent lacks key information for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters, the schema provides no constraints. The description does not explain how the agent should provide an archive file (e.g., via file upload). A baseline of 4 is not met because the description adds no meaningful semantic context beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Extract files from ZIP, RAR, 7z, TAR, GZ archives' with a clear verb+resource structure. Though the name includes 'pdf', the description distinguishes it as an archive extractor, separate from other PDF tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as similar archive extractors or other mio_pdf tools. The description focuses on credit plans rather than usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_watermark (A)
Watermark PDF — Add text watermark to PDF pages. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided; the description adds some behavioral context (runs in the browser, billing info) but omits important details such as whether it modifies the original file or creates a new one.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with purpose, followed by execution location and pricing. No fluff; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description covers basic functionality and pricing but fails to explain how to specify the watermark text or any customization options.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline of 4 applies. The description adds no parameter info, but none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Add text watermark') and the resource ('PDF pages'), distinguishing it from other PDF tools like merge and split.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks explicit guidance on when to use the tool versus alternatives; usage is implied only through the stated purpose. No exclusions or when-not-to-use cases are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_pdf_zip (A)
Create ZIP — Compress files into a ZIP archive. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It mentions 'Runs in the browser' and discusses credit coverage, but does not detail behavioral traits such as whether original files are modified, what the output is, or any side effects. This is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in a single line, then adds relevant business context (credits, day pass, workspace unlock). Every sentence provides some value, though the pricing details could be considered secondary. It is appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description covers the basic operation and credit system. However, it does not explain how files are selected for compression (e.g., implicit from context or UI) or provide any details about the output ZIP. It is somewhat incomplete for an MCP tool that may need to know about input sources.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema description coverage is trivially 100%. Per the baseline rule for 0 parameters, the description need not add parameter-level meaning, and it does not. A score of 4 reflects that no additional param info is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Create ZIP — Compress files into a ZIP archive,' which clearly states the verb and resource. The name 'mio_pdf_zip' and sibling 'mio_pdf_unzip' provide differentiation: one compresses, the other decompresses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives. It notes that the tool runs in the browser and is covered by credits, but it does not provide context where other tools might be preferred. No disclaimers or exclusions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
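Closing these gaps at the source means registering the tool with declared inputs. A minimal sketch assuming the official TypeScript MCP SDK (@modelcontextprotocol/sdk) and zod follows; the files parameter, the description text, and the handler body are illustrative, not MiOffice's implementation.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "mioffice", version: "1.0.0" });

// Hypothetical re-registration of the ZIP tool with a declared input; the
// parameter, description, and handler body are illustrative only.
server.tool(
  "mio_pdf_zip",
  "Compress the given workspace files into a ZIP archive and return its URL.",
  { files: z.array(z.string()).min(1).describe("Workspace paths to compress") },
  async ({ files }) => ({
    content: [{ type: "text", text: `Zipped ${files.length} file(s)` }],
  }),
);
```

With inputs declared this way, the schema itself answers the "how are files selected?" question that the review raises, and the description can shrink to behavior and output.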
mio_scanner_barcode (B)
Barcode Scanner — Scan and decode barcodes (EAN, UPC, Code 128, etc.) from camera or images. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It mentions 'Runs in the browser' but omits crucial details like whether it requires user interaction, real-time processing, file upload support, or data storage policies. The pricing info does not compensate for missing behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with purpose. The later sentences cover pricing, which is relevant but could be considered extraneous. Overall, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description is incomplete. It does not explain expected output (e.g., decoded text), input details (image file vs camera stream), or any constraints. The pricing info, while useful, does not fill the completeness gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so the manual baseline is 4. The description does not add parameter info, but with zero parameters there is nothing additional to convey. Schema coverage is 100% by default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Scan and decode barcodes (EAN, UPC, Code 128, etc.) from camera or images.' It uses a specific verb (scan/decode) and resource (barcodes), and distinguishes itself from sibling tools like QR scanner or document scanner.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides pricing and credit details but does not offer guidance on when to use this tool versus alternatives like mio_scanner_qr or mio_scanner_document. It lacks usage context, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_batch (A)
Batch Scanner — Rapid multi-page scanning with continuous camera mode for high-volume documents. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states 'Runs in the browser' but fails to disclose the output format, page limits, or what happens to scans. Pricing information is given, but behavioral traits are not.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description covers purpose and pricing in a few sentences. It is slightly verbose on the pricing details but not excessive, and it could be trimmed without losing meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters or output schema, the description should provide more context about the tool's behavior and result. It lacks details on output format (PDF? images?), limits, and post-scan actions, so it is adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
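One way to close that gap without an output schema is to document the result shape directly and, where supported, return structured content. The sketch below shows a hypothetical result payload in the MCP content-block style; the format, pageCount, and downloadUrl fields are illustrative assumptions, since the real output is undocumented.

```typescript
// Hypothetical mio_scanner_batch result, using the MCP content-block
// convention ({ content: [...] }). All JSON fields inside the text block are
// assumptions for illustration.
const exampleResult = {
  content: [
    {
      type: "text" as const,
      text: JSON.stringify({
        format: "pdf", // assumed: multi-page scans are bundled into one PDF
        pageCount: 12,
        downloadUrl: "blob:local", // assumed: browser-local blob, never uploaded
      }),
    },
  ],
  isError: false,
};

console.log(exampleResult.content[0].text);
```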
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline is 4. The description correctly omits parameter details, since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Rapid multi-page scanning with continuous camera mode for high-volume documents,' providing a specific verb+resource and distinguishing it from single-page scanner siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions browser execution and pricing coverage (signup credits, Day Pass) but does not explicitly guide when to use batch versus single-page scanning. It implies a high-volume use case yet lacks clear directives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_book (A)
Book Scanner — Scan book pages with dual-page detection and automatic page splitting. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must carry the full burden. It mentions browser-based execution and credit coverage but lacks details on output format, saving behavior, and result handling. Behavioral traits, such as what happens after scanning, are not disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, and each sentence adds value: purpose, browser execution and billing, and credit-unlock details. There is no fluff, and it is front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description should provide complete context for invocation and results. It covers billing and browser but misses output format, how to retrieve scans, and what the agent should expect after calling the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema coverage. The description does not need to elaborate on parameters. It adds context about dual-page detection, which is a feature, not a parameter. Baseline 4 for zero parameters is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool scans book pages with dual-page detection and automatic page splitting, specifying the resource (book pages) and distinguishing it from other scanner tools like document or barcode scanners.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for scanning books but does not explicitly guide when to use this tool versus other scanner variants like mio_scanner_document or mio_scanner_batch. No when-not or alternative guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_document (B)
Document Scanner — Scan documents with auto edge detection, perspective correction & enhancement. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must fully disclose behavior. It notes the tool runs in the browser, which is helpful, but fails to describe what happens after scanning (output format, download behavior, or any side effects). This leaves significant gaps for an AI agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the tool's core function, but the credit explanation is verbose and could be more concise. It still communicates essential information without being overly long.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description should clarify output format, download behavior, and how the agent should invoke it (e.g., guide user to scan). It only covers function and credits, leaving the operational context incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline is 4. The description adds value by stating 'Runs in the browser,' implying no file parameter is needed. However, it does not explain how the document is captured beyond that.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Document Scanner — Scan documents with auto edge detection, perspective correction & enhancement.' It clearly identifies the tool as a document scanner with specific capabilities, distinguishing it from sibling scanners like barcode, receipt, or ID card scanners.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions coverage by signup credits and Day Pass, providing context on when the tool is accessible. However, it does not explicitly state when not to use it or compare it to other scanning tools, leaving usage guidance somewhat vague beyond pricing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_handwriting (A)
Handwriting to Text — Scan handwritten notes and convert to editable text using OCR. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description adds context by noting it runs in the browser and explaining credit/pricing details. It does not disclose any destructive behavior, auth needs, or rate limits, but the operation is inherently non-destructive and the pricing info is helpful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at about 60 words, starting with a clear purpose. Some credit/pricing details may be slightly verbose for a tool description, but they provide necessary context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks explicit details about output format, supported languages, or quality limitations. Since there is no output schema, the description should compensate, but it only says 'convert to editable text' without specifics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
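Because there is no output schema to lean on, the description itself could state the return type and known limits. A minimal sketch follows; the plain-text output claim and the script-support caveat are assumptions, used only to show the shape such a description could take.

```typescript
// Hypothetical rewrite of the mio_scanner_handwriting description that makes
// the output explicit. The accuracy and language claims are illustrative
// assumptions, not documented behavior.
const revisedDescription =
  "Handwriting to Text — OCR handwritten notes from the camera or an " +
  "uploaded image and return the recognized content as plain text. " +
  "Accuracy depends on legibility; assumed strongest for Latin scripts. " +
  "Runs in the browser.";

console.log(revisedDescription);
```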
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline is 4. The description adds no parameter info, but there are none to describe. Schema coverage is 100% naturally.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it scans handwritten notes and converts to editable text using OCR. It includes a title-like header 'Handwriting to Text' and distinguishes from sibling tools like document or barcode scanners by specifying the resource and purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool is for handwriting-to-text conversion, making usage clear. However, it does not explicitly mention when not to use it or provide alternative tools, though sibling names make alternatives obvious.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_id_card (B)
ID Card Scanner — Scan ID cards, passports & licenses with fixed aspect ratio and front+back layout. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only mentions that the tool runs in the browser and its credit coverage. It omits details about side effects, data handling, or auth requirements, which are critical for a scanning tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description runs to five sentences; the first defines the core purpose, while most of the rest focuses on billing, which partially duplicates the pricing link. It could be more concise without losing critical billing context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description covers the scanning type and layout. However, it lacks information about output format (image/PDF) and any limitations on document size or quality, leaving some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the description naturally adds value by specifying what documents are scanned and the layout. The baseline for zero-parameter tools is 4, and the description provides relevant context beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states that the tool scans ID cards, passports, and licenses with fixed aspect ratio and front+back layout, clearly distinguishing it from sibling scanner tools like document, barcode, or receipt scanners.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives among the many scanner siblings. It covers billing information but does not clarify criteria for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_photo_to_pdf (C)
Photo to PDF — Convert photos to PDF with multi-image batch support and minimal processing. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility for behavioral disclosure. It mentions the tool runs in the browser and minimal processing, but does not address safety (e.g., destructiveness), permissions, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core function but then includes verbose billing info that may be unnecessary for tool selection. It could be more concise by trimming credit/pricing details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description should explain usage context (e.g., input file types, how to provide photos) but omits these. It assumes user knowledge about browser-based processing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, and schema description coverage is trivially 100%, so the zero-parameter baseline of 4 applies. The description adds no parameter-specific meaning, but the absence of parameters is clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts photos to PDF with multi-image batch support, specifying the resource and action. However, it does not explicitly differentiate from similar siblings like 'mio_jpg_to_pdf' or 'mio_pdf_image_to_pdf', relying on the scanner category context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It focuses on billing and policy details (credits, passes) rather than usage context or selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_qr (A)
QR Code Scanner — Scan and decode QR codes from camera or images instantly. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description reveals that the tool runs in the browser and has credit-based access. It does not disclose data handling or privacy implications, but these are not critical for a simple scanner tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loads the main function, and each sentence adds value without unnecessary fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters and no output schema, the description adequately covers purpose, runtime environment, and pricing context. It is complete for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to explain them. Baseline 4 applies as per guidelines.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Scan and decode QR codes from camera or images instantly.' The title 'QR Code Scanner' distinguishes it from sibling scanner tools like mio_scanner_barcode and mio_scanner_document.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that the tool runs in the browser and is covered by credits or a Day Pass, providing context for when to use it. However, it does not explicitly state when not to use it or compare it with alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_receipt (A)
Receipt Scanner — Scan receipts with tight auto-crop optimized for small documents. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full burden. It discloses that the tool runs in the browser and provides credit/payment details, but lacks information on side effects, file types, or expected input format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise, front-loading the core scanning function and then adding relevant credit details. Each sentence adds value, though the credit info could be more succinct.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and browser execution, but fails to mention output format (e.g., PDF or image) or accepted file types. Given no output schema, this omission leaves the tool's behavior incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (100% coverage trivially). The description does not explain how to provide the receipt (e.g., file upload, image paste), which is a significant gap for a scanner tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool scans receipts with tight auto-crop optimized for small documents. It distinguishes from siblings like scanner_document and scanner_barcode by specifying 'receipts' and the specialized auto-crop feature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for receipts but does not explicitly compare alternatives or state when not to use it. It offers no guidance on choosing this over other scanner tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_scanner_whiteboard (A)
Whiteboard Scanner — Scan whiteboards with high contrast, color boost & white balance correction. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool runs in the browser and applies image enhancements, but it does not describe the output format or behavior beyond that. With no annotations to supplement it, the description carries more of the burden.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core function in the first sentence, followed by credit information. Every sentence adds value, with no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
It covers purpose, basic behavior, and pricing. It lacks an output description, but for a zero-parameter tool of low complexity it is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the baseline is 4. The description adds credit and pricing context, which is extra but not needed for parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it scans whiteboards with specific enhancements (high contrast, color boost, white balance correction). The name and description distinguish it from sibling scanner tools such as the document and barcode scanners.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no explicit guidance on when to use this tool, or when not to, compared with other scanners. The name implies whiteboards, but the siblings are numerous and similar, so more guidance would help.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_aac_to_mp3 (B)
AAC to MP3 — Convert AAC audio to MP3 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must carry the full burden of disclosure. It mentions browser-based processing and higher credit usage but fails to disclose key behaviors such as the file input mechanism, size limits, output format details, and whether the operation is synchronous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph with the core purpose upfront. However, it includes lengthy credit/pricing details that could be condensed or moved to a separate note, reducing clarity for the primary conversion task.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should explain return values (e.g., 'returns an MP3 file'). It omits input specification, output format, and constraints (file size, supported AAC variants). The focus on pricing does not compensate for operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
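An output schema would make the return value explicit. Below is a hedged JSON-Schema sketch of what this converter plausibly returns; every field name here (url, sizeBytes, durationSeconds) is a hypothetical choice, not the server's actual contract.

```typescript
// Hypothetical output schema for mio_video_aac_to_mp3, written as a JSON
// Schema object literal. Field names and shapes are illustrative guesses.
const aacToMp3OutputSchema = {
  type: "object",
  properties: {
    format: { const: "mp3" },
    url: { type: "string", description: "Browser-local URL of the converted file" },
    sizeBytes: { type: "integer", minimum: 0 },
    durationSeconds: { type: "number", minimum: 0 },
  },
  required: ["format", "url"],
} as const;
```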
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, but the description does not clarify how input is provided (e.g., file upload). It only states the conversion direction without explaining how to invoke the tool. Despite schema coverage being 100%, the description adds no parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'AAC to MP3 — Convert AAC audio to MP3 format,' clearly stating the verb (convert) and resource (AAC audio to MP3). It distinguishes from many sibling tools with similar names by specifying the input format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when AAC to MP3 conversion is needed, and mentions credit constraints (welcome credits cover limited runs, Day Pass excludes this workspace) which guides cost-conscious use. However, it does not explicitly suggest alternatives among siblings or state prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_add_audio (C)
Add Audio to Video — Replace or add audio track to a video file. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosing behavior. It mentions it processes in the browser and uses credits, but does not explain side effects (e.g., whether original file is modified), required inputs, or output format. The pricing details are not behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose, but the latter half is dominated by pricing minutiae that could be summarized or linked. It is longer than necessary for a tool with no parameters, reducing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the empty input schema and no output schema, the description should explain how to invoke the tool and what to expect. It fails to do so, focusing solely on credits. The agent cannot determine how to provide the video and audio files.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which is unusual for a tool that requires a video and audio file. The description does not clarify how inputs are provided (e.g., via selection or upload). The baseline of 4 for zero parameters is not met because the description adds no meaningful input guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
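The critique above suggests a concrete fix: declare the two required inputs in the schema instead of leaving it empty. A sketch follows; the parameter names (videoFile, audioFile, mode) are hypothetical, since the real tool exposes none.

```typescript
// Hypothetical input schema for mio_video_add_audio. All parameter names are
// assumptions for illustration; the actual tool declares no parameters.
const addAudioInputSchema = {
  type: "object",
  properties: {
    videoFile: { type: "string", description: "Path or handle of the source video" },
    audioFile: { type: "string", description: "Path or handle of the audio track" },
    mode: {
      type: "string",
      enum: ["replace", "mix"], // replace the existing track, or mix over it
      default: "replace",
    },
  },
  required: ["videoFile", "audioFile"],
} as const;
```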
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool adds or replaces an audio track in a video file. However, it does not differentiate from sibling tools like mute or normalize audio, which have similar purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides extensive pricing and credit information but offers no guidance on when to use this tool versus alternatives like mio_video_mute or audio editing tools. It does not explain use cases or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_auto_captions (A)
Auto Captions — Automatically add subtitles to video using AI speech recognition. AI Studio run — dispatches to our AI workers (Modal). Credits per run vary by model and file size. Day Pass and welcome credits do not include AI Studio. Files auto-delete within 24 hours; retention is auditable at mioffice.ai/account/tasks. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that it dispatches to AI workers (Modal), credits vary by model/file size, files auto-delete within 24 hours, and retention is auditable. This provides substantial behavioral context beyond the name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph of about six sentences, front-loaded with the purpose, then efficiently covering usage details, restrictions, and pricing. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters, no output schema, and no annotations, the description covers the main purpose, usage restrictions, and behavioral traits. However, it does not specify how the video input is provided (e.g., via a previous tool or context) or the output format, leaving a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters (empty properties), so there is nothing to document. The description does not need to add parameter info. With no parameters, baseline is 4, and the description does not detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Automatically add subtitles to video using AI speech recognition', which is a specific verb+resource. It distinguishes itself from siblings like 'mio_ai_transcriber' and 'mio_ai_video_subtitler' by focusing on auto-captions via AI speech recognition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use (auto-captions) and important restrictions such as 'Day Pass and welcome credits do not include AI Studio' and 'Files auto-delete within 24 hours'. However, it does not explicitly list alternative tools or when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_avi_to_mp4 (A)
AVI to MP4 — Convert AVI videos to MP4 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description partially discloses behavior by stating it processes in the browser and uses credits, but it lacks details on conversion specifics such as file size limits, supported codecs, or output handling. It does not contradict annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and each sentence adds value regarding credits and pricing. It is slightly verbose but not excessively so, and the structure is logical.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is incomplete in that it does not explain how the input AVI file is provided (no parameters) or where the output MP4 is saved. Given no output schema, the agent lacks critical information to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline is 4. The description does not add parameter meaning, but it is not required as there are no parameters to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts AVI videos to MP4 format with a specific verb and resource, and it distinguishes from sibling tools like mio_video_mkv_to_mp4 and mio_video_mov_to_mp4 by specifying the input format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credit usage, workspace restrictions, and pricing, helping the agent understand when to use the tool. It mentions that Day Pass does not include this workspace and that credits are limited, but does not explicitly exclude other formats or provide alternative tool recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_color (B)
Color Grade Video — Adjust brightness, contrast, saturation, gamma, and hue. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reveals that the tool runs in the browser and consumes credits, but it omits critical behavioral details such as whether the input video is modified in-place or a new file is created, and how the input video is provided (since the schema has no parameters). This lack of clarity reduces transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, but includes lengthy pricing and credit details that could be condensed or referenced elsewhere. While not overly verbose, the additional information detracts from conciseness for an agent focused on functional behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters and no output schema, the description should fully explain how to invoke the tool and what to expect. It fails to clarify how the video is selected, whether it is pre-loaded from context, and what the output format or result is. This leaves significant gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite having no parameters in the schema, the description enriches understanding by listing the adjustable attributes (brightness, contrast, etc.), which effectively describes the internal controls. This adds meaning beyond the empty schema, making the tool's capabilities clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
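Since the description already names the adjustable attributes, they translate naturally into a parameter schema with ranges and defaults. The ranges below are conventional choices (1.0 meaning unchanged, hue in degrees), not values documented by the server.

```typescript
// Hypothetical input schema for mio_video_color. The attribute list comes
// from the tool's own description; the numeric ranges and defaults are
// conventional assumptions.
const colorGradeSchema = {
  type: "object",
  properties: {
    brightness: { type: "number", minimum: 0, maximum: 2, default: 1 },
    contrast: { type: "number", minimum: 0, maximum: 2, default: 1 },
    saturation: { type: "number", minimum: 0, maximum: 2, default: 1 },
    gamma: { type: "number", minimum: 0.1, maximum: 3, default: 1 },
    hue: { type: "number", minimum: -180, maximum: 180, default: 0 }, // degrees
  },
} as const;
```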
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Color Grade Video' and immediately lists specific adjustments: brightness, contrast, saturation, gamma, and hue. This clearly identifies the tool's purpose as a color grading tool for videos, distinguishing it from the many other video processing tools in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like video_enhancer or video_denoise. It does not mention any prerequisites or exclusions. The pricing information is present but does not help in deciding usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_compress (B)
Compress Video — Reduce video file size. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It notes credit usage and workspace restrictions but omits critical details like supported input formats, output quality, lossy vs lossless behavior, or any side effects. The agent lacks key behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose in the first sentence. However, it includes several sentences about pricing and credits which, while potentially useful, add length. The overall structure is clear but could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no parameters, the description should comprehensively explain the tool's operation. It fails to cover input formats, output characteristics, quality settings, and limitations; the pricing information alone does not fully inform the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is 100% and, per the guidelines, the zero-parameter baseline of 4 applies. The description adds no parameter-related information, as there is none to add, but it also does not explain how the tool receives the video input, which is a gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Compress Video — Reduce video file size.' The verb 'compress' and resource 'video' are specific, and among siblings, there are no other video compress tools, so it is well-distinguished.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool over alternatives or when not to use it. It mentions credit usage and pricing but does not help an agent decide between this and other compression tools like 'mio_image_compress' or 'mio_pdf_compress'. Usage is implied by the name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_converter (A)
Video Converter — Convert between MP4 and WebM with quality control. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that processing happens in the browser and how credit usage compares with other workspaces, but it lacks detail on conversion behavior, limitations, and what 'quality control' entails. With no annotations, the description should provide more technical clarity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose, then provides necessary credit context. It is somewhat lengthy, but each sentence adds value; the structure is logical, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
It covers credit and workspace context well but is incomplete for tool differentiation: it does not explain how quality is controlled, state file size limits, or compare itself to sibling converters. With so many siblings, more specific guidance is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters (100% coverage by default), so the baseline of 4 applies. The description adds no parameter details, but none are needed. The claim of 'quality control' is vague but does not contradict the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts between MP4 and WebM with quality control. This differentiates it from specific one-way converters like mio_video_webm_to_mp4, showing a distinct bidirectional purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credits and workspace inclusion (e.g., not covered by the Day Pass) but does not guide when to use this tool versus the many format-specific sibling converters. Usage context is implied rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_crop (C)
Crop Video — Crop video to custom dimensions. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden for behavioral disclosure. It only mentions browser processing and credit usage, but fails to disclose critical traits like output format, side effects on input, or whether it's synchronous. The description is almost entirely about pricing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is verbose with extraneous pricing and workspace details that do not aid in tool usage. The first sentence is concise but the remaining content is irrelevant for invocation, making it poorly structured for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description is critically incomplete. It does not explain how to specify crop dimensions, what the output is, or any prerequisites. The pricing information does not compensate for the lack of functional details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, yet the description claims cropping to custom dimensions. No mechanism for specifying dimensions is provided. The description adds no meaning beyond the empty schema, leaving the agent without necessary param guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
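For crop, the missing pieces are both the parameters and their interaction: the crop window must fit inside the source frame, a constraint a schema alone cannot express, so the description should carry it. A hedged sketch with hypothetical names:

```typescript
// Hypothetical definition for mio_video_crop. Parameter names and the
// constraint wording are illustrative; the real tool declares no parameters.
const cropTool = {
  name: "mio_video_crop",
  description:
    "Crop Video — Keep only the rectangle (x, y, width, height) of each frame. " +
    "Constraint: x + width and y + height must not exceed the source dimensions.",
  inputSchema: {
    type: "object",
    properties: {
      x: { type: "integer", minimum: 0, description: "Left edge, px" },
      y: { type: "integer", minimum: 0, description: "Top edge, px" },
      width: { type: "integer", minimum: 1, description: "Crop width, px" },
      height: { type: "integer", minimum: 1, description: "Crop height, px" },
    },
    required: ["x", "y", "width", "height"],
  },
} as const;
```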
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Crop Video — Crop video to custom dimensions' which clearly indicates the action (crop) and resource (video). It is specific enough to distinguish from siblings like resize or trim, though it could be more precise about the nature of cropping.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks guidance on when to use this tool versus alternatives. It does not mention any criteria for choosing crop over resize, trim, or other video editing tools. The focus is on pricing and credits, not usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_denoise (grade B)
Video Denoise — Remove grain and noise from video footage. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must fully disclose behavior. It mentions browser processing and credit cost but fails to describe denoising strength, output format, or any side effects. The pricing details are not behavioral traits of the tool itself.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description starts well with the core function but then includes extensive pricing and workspace information that is not essential for understanding how to use the tool. It could be more concise by moving pricing details elsewhere.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers pricing and workspace access, which is useful context, but it does not explain how input is provided (e.g., current video file) or what output to expect. Without an output schema, the return format is unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so schema coverage is vacuously 100%. The description adds no parameter information, but none is needed; the baseline of 4 for zero-parameter tools applies, and the schema already fully describes the (empty) parameter set.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Remove grain and noise from video footage' which clearly identifies the tool's core function. However, it does not distinguish it from similar video enhancement tools like mio_ai_video_enhancer or the sibling audio denoise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., audio denoise or other video filters). It focuses on pricing and credit usage rather than usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_extract_subtitles (grade A)
Extract Subtitles — Extract embedded subtitles from MKV/MP4 video. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that processing occurs in the browser, consumes credits, and explains workspace pricing details. However, it does not describe the output format or any side effects, which leaves some ambiguity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the core purpose. The additional pricing details are relevant but slightly expand the length. Every sentence serves a purpose, though some could be consolidated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks information about the tool's return value (e.g., subtitle file format). Given no output schema, this is a gap. It also does not explain how the user provides the video file. Credit and workspace details are well covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
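Absent an output schema, even a short structured sketch would close this gap. The shape below is an assumption, not the server's actual return format; all field names are hypothetical.

```typescript
// Hypothetical outputSchema for mio_video_extract_subtitles. The server
// publishes no output schema; every field here is assumed.
const extractSubtitlesOutput = {
  type: "object",
  properties: {
    format:   { type: "string", enum: ["srt", "vtt", "ass"], description: "Subtitle format found in the container" },
    language: { type: "string", description: "Language tag of the extracted track, if declared" },
    content:  { type: "string", description: "The extracted subtitle text" },
  },
  required: ["format", "content"],
} as const;
```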
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema coverage is 100%. Per guidelines, a baseline of 4 applies for zero parameters. The description naturally adds no parameter semantics, which is acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Extract embedded subtitles from MKV/MP4 video,' specifying the verb, resource, and file formats. It differentiates from sibling tools like mio_ai_video_subtitler (which adds subtitles) and mio_ai_transcriber (speech-to-text).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credit usage and pricing but lacks explicit guidance on when to use this tool versus alternatives. It does not mention typical use cases or prerequisites like having a video file with embedded subtitles.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_fade (grade C)
Video Fade — Add fade-in and fade-out transitions to video. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description only mentions it runs in the browser and consumes credits, but lacks details on behavioral traits like whether it modifies the original video, output format, or required input.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is verbose, mixing purpose with extensive credit/pricing details that are not essential for understanding how to use the tool. Could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no parameters or output schema, so description should explain what the tool expects and produces. It only states the effect (fade) and credit restrictions, leaving important context (e.g., input video source, output location) unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so description need not add parameter meaning. Baseline for 0 params is 4; no additional info is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'add' and the resource 'fade-in and fade-out transitions to video,' but does not differentiate the tool from its siblings. Since 'fade' is unique among the many video tools, however, the purpose remains clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Description focuses on credit/workspace details rather than usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_flac_to_mp3 (grade B)
FLAC to MP3 — Convert FLAC audio to MP3 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavior. It only mentions in-browser processing and credit usage, but omits critical details like file size limits, supported sample rates, or output quality. The credit discussion does not constitute behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The purpose is front-loaded, but about half the description covers credit and pricing information, which is tangential to tool selection. It could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless conversion tool, the description covers the basic purpose and mentions the workspace. However, it lacks technical details such as output quality, input requirements, or any limitations, which would help an agent invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description is not required to add parameter details. The baseline of 4 applies, and no further explanation is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts FLAC to MP3 and mentions the workspace. However, it does not differentiate from sibling tools like mio_video_aac_to_mp3 or mio_audio_converter, which also convert audio formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description focuses on credit costs rather than providing usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_flip (grade C)
Flip Video — Mirror video horizontally or vertically. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It mentions browser processing and credit consumption, but fails to disclose whether the operation is destructive, reversible, or requires specific permissions. Key behavioral aspects are omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph with extraneous pricing details. The first sentence is concise, but subsequent sentences on credits and plans add length without aiding tool usage. It could be more focused.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, output schema, or annotations, the description should fully explain tool behavior. It covers the basic action but omits how to specify flip direction, expected input format, and output details, leaving gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and schema coverage is 100% trivially. Baseline for 0 parameters is 4. The description adds no parameter info, which is acceptable given the schema covers everything.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool flips video horizontally or vertically. It uses a specific verb and resource, distinguishing it from siblings like rotate or crop. However, it does not explain how to specify the flip direction, which may leave ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description focuses on credit pricing and workspace tiers but lacks guidance on when to use this tool versus alternatives (e.g., rotate, crop). No explicit when-not-to-use or alternative tool references are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_gif_to_mp4 (grade A)
GIF to MP4 — Convert animated GIF to MP4 video. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses browser processing and credit consumption, but lacks details on safety or side effects beyond credits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Reasonably concise, front-loads main purpose, though the credit/pricing details could be slightly trimmed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers main functionality, run environment, and pricing, but does not describe output format or return behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema; description correctly abstains from adding param info. Baseline 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Convert' and resource 'animated GIF to MP4 video', distinguishing it from siblings like 'mio_video_to_gif' which does the reverse.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context about credit usage and workspace restrictions but does not explicitly compare with alternatives or state when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_hdr_to_sdr (grade A)
HDR to SDR — Convert HDR video to SDR for universal playback. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided. The description discloses that it processes in-browser and uses more credits than other workspaces, but it does not clarify whether the conversion is destructive, what happens to the original, or any quality implications. Essential behavioral traits are partially covered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is verbose, running six sentences. The first is crisp, but the remaining five focus on pricing and credits and could be condensed. The structure front-loads the core purpose but then digresses into secondary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
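A before/after sketch of the same description shows how much of it is pricing boilerplate. The "rewrite" wording is illustrative, not the vendor's copy, and the claim that the source file is not modified is an assumed behavior the vendor would need to confirm.

```typescript
// Current description (truncated) versus a hypothetical front-loaded rewrite.
const current =
  "HDR to SDR — Convert HDR video to SDR for universal playback. Video And Audio " +
  "Studio run — processes in the browser but uses more credits per run than the " +
  "Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. ...";

// "without modifying the source file" is an assumption, not confirmed behavior.
const rewrite =
  "Convert an HDR video to SDR (tone-mapped for universal playback). Runs locally " +
  "in the browser without modifying the source file. Consumes Video And Audio Studio " +
  "credits; see mioffice.ai/pricing.";
```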
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters, the description fails to explain how the agent should specify the input video (e.g., no file parameter). It also lacks details about output format, supported codecs, or limitations. The description is incomplete for an agent to invoke the tool correctly without assuming external context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description cannot add meaning beyond the schema, and coverage is trivially 100%. A baseline of 4 is appropriate since no additional parameter information is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Convert HDR video to SDR for universal playback.' The verb 'convert' and resource 'HDR video to SDR' specify exactly what the tool does. Among siblings like mio_video_converter or mio_video_color, this tool is uniquely identified by the HDR-specific transformation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credit costs and workspace restrictions, which helps an agent decide if the user can afford to use it. However, it does not explicitly compare to alternative tools (e.g., when to use this versus mio_video_converter) or state prerequisites. The usage guidance is implicit but not direct.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_loop (grade B)
Loop Video — Repeat video multiple times to create a looping clip. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description adds some behavioral context: it runs in the browser, uses credits, and sits within the credit-based workspace system. However, it does not disclose what happens to the original video or what the output format is, leaving gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, but then includes lengthy pricing details that could be relocated to a pricing tool or global context, reducing clarity for the tool's primary function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description lacks details on the number of loops, input requirements, or output format. It is incomplete for an agent to know what to expect or how to invoke the tool without additional context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema description coverage is trivially 100%. The description mentions 'multiple times' but does not specify how to control the loop count, so it adds no semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Loop Video — Repeat video multiple times to create a looping clip.' This verb+resource combination is specific and distinguishes it from other video tools like trim or merge.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context about credit usage and workspace restrictions, but does not specify when to use this tool over alternatives or describe when not to use it. It implies usage through credit availability but lacks explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_m4a_to_mp3 (grade A)
M4A to MP3 — Convert M4A audio files to MP3 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It mentions browser processing and credit consumption, which are useful. However, it does not disclose whether the conversion is non-destructive (likely safe), what happens to the source file, or the output format details. More could be said about side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: two sentences defining purpose, then a few sentences about credits and access. Every sentence adds value. It is front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and no annotations, the description is fairly complete. It explains the conversion, credit implications, and directs to pricing. It could elaborate on output handling (e.g., download link) but is adequate for a straightforward tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (empty properties), and schema description coverage is 100%. The description adds no parameter info because none exist. For a parameterless tool the baseline of 4 applies, and the description clearly states the conversion operation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'M4A to MP3 — Convert M4A audio files to MP3 format.' This provides a clear, specific verb and resource. It distinguishes from sibling converters by explicitly naming the source and target formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains credit usage, that Day Pass excludes this workspace, and points to pricing for details. It implicitly tells the agent when to use (when M4A to MP3 conversion is needed) and provides cost/access context. It does not explicitly list alternative tools for similar conversions but is adequate for a simple converter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_merge (grade B)
Merge Videos — Combine multiple videos into one file. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description mentions in-browser processing and credit usage but does not disclose how videos are selected or what happens to input files. For a merge tool, this is insufficient transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Core purpose is front-loaded, but the description includes a lengthy pricing section that could be condensed. Every sentence adds some value, but it's longer than needed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no parameters, the description should explain how to supply videos and what to expect. It lacks details on input method, supported formats, and output behavior, focusing excessively on pricing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline of 4 applies. The description adds no parameter details, but none are needed since the empty schema is trivially complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
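For a merge, the missing piece is how multiple inputs and their order are supplied. A hypothetical schema sketch follows; the field name is assumed, since the real schema is empty.

```typescript
// Hypothetical input schema for mio_video_merge; the published schema is empty.
const mergeInputSchema = {
  type: "object",
  properties: {
    files: {
      type: "array",
      items: { type: "string" },
      minItems: 2,
      description: "Paths or upload IDs of the source videos, in concatenation order",
    },
  },
  required: ["files"],
} as const;
```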
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Merge Videos — Combine multiple videos into one file.' This is a specific verb and resource, differentiating it from other merge tools like pdf_merge and other video tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides credit and workspace context (Video And Audio Studio, credits per run, Day Pass exclusion) but does not explicitly state when to use this tool over sibling video tools like mio_video_compress or mio_video_trim.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_mkv_to_mp4 (grade B)
MKV to MP4 — Convert MKV videos to MP4 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It mentions browser processing and credit cost but does not disclose behavioral traits like permissions, rate limits, or whether the original file is modified. Important safety information is missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact at six sentences. The first delivers the core purpose, and the additional sentences about credits and pricing provide useful context without being overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple conversion tool with zero parameters and no output schema, the description covers purpose, processing mode, and credit implications. It is sufficient for an agent to decide to use it, though it omits details about subtitle handling or codec defaults.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no properties, so schema description coverage is effectively 100%. The description cannot add parameter information beyond the schema. A baseline of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts MKV to MP4 ('Convert MKV videos to MP4 format'). This is a specific verb+resource. While it does not explicitly differentiate from sibling video conversion tools like mio_video_avi_to_mp4, the name and first line imply the format scope, making it clear enough.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credit usage and pricing but does not give guidance on when to use this tool versus alternatives (e.g., mio_video_converter). It lacks explicit when-to-use/when-not-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_mov_to_mp4 (grade B)
MOV to MP4 — Convert MOV videos to MP4 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It mentions that the conversion 'processes in the browser' and discusses credits, but it does not describe potential side effects, limitations on input file size or codecs, whether the original file is retained or deleted, or any other behavioral nuances. This is insufficient for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is six sentences long. It front-loads the purpose but then includes detailed credit and pricing information that is extraneous to the tool's core behavior. This extra content, while informative, reduces conciseness; a more streamlined description would improve this score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, no output schema, and no annotations, the description covers the basic conversion function and credit context. However, it lacks details on how to provide the input file (e.g., upload via a prior tool), any output file handling, or prerequisites. This leaves some gaps for an agent to understand the complete workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema description coverage is 100% (trivially). The description adds no parameter information, which is acceptable because there are none. Per the guidelines, 0 params yields a baseline of 4. The description does not detract from this baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'MOV to MP4 — Convert MOV videos to MP4 format.' It specifies the verb and resources (MOV to MP4). However, it does not distinguish this from sibling tools like mio_video_converter or other format-specific converters, so it lacks explicit differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context about credit usage, stating it uses more credits than Document/Image/Scanner workspaces and that Day Pass does not include this workspace. It also mentions credit packs. However, it does not give explicit when-to-use or when-not-to-use guidance relative to alternative video conversion tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_mp3_cutter (grade C)
MP3 Cutter — Trim and cut MP3 audio files. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It states 'processes in the browser' and credit cost, but fails to disclose essential behaviors: how input is provided (no parameters in schema), output format, file size limits, or whether the operation is destructive. This is insufficient for an agent to understand side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The first sentence concisely states the purpose, but over half of the description is devoted to credit and pricing information, which is extraneous and better suited to a separate pricing tool. This dilutes the message and makes the description unnecessarily long.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description should fully explain how to invoke it and what to expect. It fails to do so: no mention of required input file, output, or usage steps. Pricing info is irrelevant for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters (schema coverage 100% of an empty object). The description adds no explanation of how the tool receives input or what parameters are needed. For a trimming tool, this is a critical omission. The description should clarify expected input, but it does not.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
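A trim tool minimally needs a source and a time range. Below is a hypothetical sketch of what that could look like; all field names and units are assumptions, since the published schema is empty.

```typescript
// Hypothetical parameters for mio_video_mp3_cutter; the real schema is empty.
const mp3CutterInputSchema = {
  type: "object",
  properties: {
    file:  { type: "string", description: "Path or upload ID of the MP3 to trim" },
    start: { type: "number", minimum: 0, description: "Trim start, in seconds" },
    end:   { type: "number", minimum: 0, description: "Trim end, in seconds; must exceed start" },
  },
  required: ["file", "start", "end"],
} as const;
```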
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'MP3 Cutter — Trim and cut MP3 audio files', which clearly states the tool's purpose (trimming MP3 files). This distinguishes it from many sibling tools like mio_video_trim or mio_video_to_mp3. However, the subsequent pricing details dilute the clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs. alternatives (e.g., other audio editors or trimming tools). It only mentions credit usage and pricing, which is not usage guidance. The description does not help an agent decide between this and similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_mute (grade B)
Remove Audio from Video — Remove the audio track from a video, keeping video only. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility for behavioral disclosure. It mentions processing in-browser and credit costs, but does not describe side effects (e.g., whether the original file is modified, output format, or if the operation is reversible). Critical behavioral traits are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description includes relevant but somewhat extraneous information about credits and pricing, making it longer than necessary for a no-parameter tool. It could be more concise by front-loading the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description should still explain how the tool is invoked (e.g., provide a video file) and what the output is. It lacks these details, making it incomplete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (100% coverage, empty), so the description is not required to explain parameters. The baseline for 0 parameters is 4. The description adds no parameter information, but that is acceptable since there are none.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Remove Audio from Video' which directly explains what the tool does. The verb 'Remove' and resource 'Audio from Video' are specific, and the tool name 'mio_video_mute' aligns. It distinguishes from siblings like 'mio_video_add_audio' which adds audio instead.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for removing audio, but does not explicitly state when to use this tool over alternatives (e.g., other video audio tools). It mentions credit usage and pricing, but not contextual guidance like 'Use this when you need a silent video without any audio track.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_normalize_audio (grade C)
Normalize Audio — Normalize audio volume to standard levels. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It mentions browser processing and credit usage but does not disclose whether the operation is destructive, what happens to the original file, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is adequate but includes extraneous details about credits and pricing that distract from the core function. It could be more concise by front-loading the purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-param, no-output-schema tool, the description should explain implicit inputs and outputs. It fails to mention what video/audio is normalized, the output format, or how to use the result. Context is incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the description could clarify what input the tool acts on (e.g., current file in workspace). It does not, leaving ambiguity about how the tool is invoked.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Normalize audio volume to standard levels', which is a specific verb+resource. However, it does not explicitly distinguish this tool from siblings like mio_audio_denoise or mio_audio_equalizer, though the normalization action is fairly unique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks guidance on when to use this tool versus alternatives. It provides credit and pricing context but does not explain prerequisite conditions or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_ogg_to_mp3 (grade B)
OGG to MP3 — Convert OGG Vorbis audio to MP3 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must bear full burden. It mentions browser processing and credit costs but fails to disclose key behavioral traits such as output handling, file size limits, synchrony, or whether the conversion is lossless. The billing info is useful but not behavioral.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is six sentences long, with the first capturing the purpose clearly. The subsequent sentences, however, focus heavily on billing details (credits, Day Pass, pricing) that are not essential for tool selection, making it less concise than ideal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description covers the conversion action and credit implications. However, it omits important operational details like output format specifics (bitrate, quality), file size limits, or how to retrieve the output, leaving gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no properties (0 parameters), and schema description coverage is 100%. The description adds no parameter information, but none is needed. However, it could have clarified how input is provided (e.g., file selection).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'OGG to MP3 — Convert OGG Vorbis audio to MP3 format,' which is a specific verb (convert) and resource (OGG Vorbis audio to MP3). The tool name itself differentiates it from siblings like mio_video_aac_to_mp3, ensuring clear purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives like other conversion tools. It only implies usage via the format name, but lacks guidance on when to prefer this tool or exclusions (e.g., not for other audio formats).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_resize (grade C)
Resize Video — Change video resolution with quality presets (480p to 1440p). Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It states the tool processes in the browser and uses credits, but fails to disclose key behaviors: how the target video is identified (no input parameters), how output is delivered, whether the original file is modified, or any constraints like supported input formats. The focus on pricing overshadows operational transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise but includes extraneous pricing details that could be omitted or placed elsewhere. The first line effectively states the purpose, but subsequent sentences about credit packs and subscriptions add length without aiding tool selection or invocation. The structure is acceptable but not optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the zero-parameter schema and no output schema, the description should explain how the tool determines the input video and what the output format is. It only mentions resolution presets, leaving a significant gap. Sibling resize tools (e.g., mio_video_resize_square) share the same ambiguity, but that does not make this description any more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is trivially 100% and the baseline of 4 applies. The description adds value by mentioning resolution presets (480p to 1440p), implying the tool uses implicit context to determine the video to resize. However, it does not explain how to specify the preset or what the default behavior is, so it only partly compensates for the lack of parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Resize Video — Change video resolution with quality presets (480p to 1440p)', specifying the verb (resize), resource (video), and scope (resolution with presets). It is clear enough to understand the core function, though it does not explain how the resolution preset is selected given no input parameters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions credit usage and pricing model but provides no guidance on when to use this tool over sibling resize tools (e.g., mio_video_resize_square, mio_video_resize_reels). It does not differentiate use cases or state prerequisites, leaving the agent without clear selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
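These zero-parameter video tools give an agent nothing to pass at call time, which is the recurring gap flagged above. A minimal sketch of such a call, assuming the official MCP TypeScript SDK and a placeholder endpoint URL (neither is documented by this server), makes the problem concrete: both the target video and the resolution preset must come from implicit session state.

```typescript
// Minimal sketch: calling a zero-parameter tool over Streamable HTTP.
// Assumes the official MCP TypeScript SDK; the endpoint URL is a placeholder.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")),
);

// The empty input schema means there is nothing to send: no file reference,
// no preset. The server must infer both from session context.
const result = await client.callTool({ name: "mio_video_resize", arguments: {} });
console.log(result.content);
```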
mio_video_resize_reels (A)
Resize Video for Reels — Resize any video to Instagram Reels 9:16 vertical format (1080×1920). Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions browser processing and credit usage, but does not disclose details about audio handling, output format, or potential side effects beyond the new dimensions. Some transparency is present, but gaps remain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose, but includes pricing and credit details that could be streamlined. It is generally efficient, though slightly verbose for the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides dimensions and credit context, but lacks details on input video requirements, output characteristics (e.g., codec, quality), and return value. With no output schema, more completeness would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters; schema coverage is 100% by default and, per guidelines, the baseline is 4. The description does not need to add parameter information, but it does provide the output dimensions, which is relevant context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it resizes video to Instagram Reels vertical 9:16 format (1080×1920). The verb 'Resize' and specific format distinguish it from other resize siblings like shorts, tiktok, square.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on credit usage and pricing, but does not explicitly state when to use this tool vs alternatives. It implies usage for Reels format, but lacks direct comparison or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_resize_shorts (A)
Resize Video for Shorts — Resize any video to YouTube Shorts 9:16 vertical format (1080×1920). Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It mentions browser processing and credit usage, but lacks details on destructive behavior, auth requirements, or output handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core purpose, then provides credit and pricing details that are useful but slightly verbose. It could be trimmed without losing essentials.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and credit context, but lacks return-type information and input constraints (e.g., file size limits, supported formats). For a simple tool with no parameters, it is moderately complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so schema coverage is 100%. No parameter information is needed beyond the schema; the baseline of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it resizes videos to YouTube Shorts 9:16 vertical format (1080×1920), distinguishing it from sibling resize tools for reels, square, TikTok, and generic resize.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for YouTube Shorts but does not explicitly guide when to use this vs alternative resize tools, nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_resize_square (A)
Resize Video to Square — Resize any video to 1:1 square format (1080×1080) for Instagram and social media. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description clearly explains that the tool processes in the browser, uses credits, and notes credit differences between workspaces. It also mentions limitations (Day Pass exclusion, credit pack nature). However, it does not disclose whether the original file is preserved or the output format (e.g., MP4, AVI).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose but includes extensive credit details that could be condensed. Each sentence adds value, though the pricing reference is not essential for tool invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description adequately covers the tool's outcome (square 1080×1080) and credit constraints. However, it does not specify how to provide the input video (e.g., file selection) or the output file format, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is trivially 100%. The description adds no parameter details, which is acceptable. Baseline for zero parameters is 4. No additional parameter semantics needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Resize Video to Square — Resize any video to 1:1 square format (1080×1080) for Instagram and social media.' It clearly specifies the verb (resize), resource (video), and target format (square), differentiating from similar tools like mio_video_resize and platform-specific resize tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for square videos on social media but does not explicitly state when to prefer this tool over alternatives (e.g., mio_video_resize for custom sizes). No when-not-to-use or comparison with siblings is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_resize_tiktok (A)
Resize Video for TikTok — Resize any video to TikTok 9:16 vertical format (1080×1920). Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool uses more credits per run and runs as a browser process, but it does not explain output behavior, file limits, or whether the operation is destructive. That disclosure is needed since no annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise overall, with purpose stated first. Includes important credit context, but some pricing details (Day Pass, credit pack) may be extraneous for tool invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose and cost but lacks input file requirements (e.g., supported formats, size limits) and output details, leaving gaps for a complete invocation context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so schema coverage is trivially 100%. The description adds credit usage context; no parameter semantics are needed, so the baseline of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states it resizes video to TikTok 9:16 vertical format (1080×1920), matching the tool name and differentiating it from other resize tools like resize_reels and resize_shorts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clearly specifies the tool is for TikTok format, guiding use cases. However, it does not explicitly exclude other scenarios or provide direct comparisons to siblings, missing some usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_reverse (A)
Reverse Video — Play video backwards with reversed audio. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that processing occurs in the browser and that it uses more credits than other workspaces. However, it does not mention whether the operation is destructive, any required permissions, or the output format. The pricing/credit info adds some behavioral context but leaves gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is overly verbose, dedicating most of its length to pricing and workspace details that are tangential to the tool's operation. It could be shortened to one sentence about reversing video and audio, making it less efficient for an AI agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters and no output schema, the description should explain input mechanism and output. It fails to mention how the video is provided (e.g., file selection from context) or what the result is (e.g., a reversed video file). The description is incomplete for an agent to understand the full workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so schema coverage is effectively 100%. Per guidelines, baseline is 4. The description does not need to add parameter semantics, and it correctly avoids misleading parameter information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it reverses video with reversed audio, using specific verbs (Reverse, Play backwards) and resource (video). It distinguishes itself from siblings like mio_video_speed or mio_video_flip by explicitly mentioning reversal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool vs alternatives. It focuses on credit and workspace information rather than use cases or exclusions. While it implies it's used for reverse playback, no guidance is given on scenarios or complementary tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_rotate (B)
Rotate Video — Rotate videos 90, 180, or 270 degrees. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should fully disclose behavioral traits. It states 'processes in the browser' and mentions higher credit usage, but fails to mention whether the tool is destructive, how input is provided, or any side effects. The pricing details do not compensate for missing behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The first sentence is concise and front-loaded ('Rotate Video — Rotate videos 90, 180, or 270 degrees.'). However, the remaining sentences about credits, Day Pass, and pricing are tangential to the tool's purpose and could be shortened or moved to a separate note. They add length without aiding selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description should explain how the tool receives input and what the output is. It fails to mention whether a video file must be pre-selected or whether the tool works on the current document. Sibling tools like mio_video_flip are not differentiated in terms of invocation context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to explain parameter meaning. It adds value by specifying the rotation degrees (90, 180, 270) which are not in the schema. Baseline for 0 params is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Rotate Video') and specifies the allowed degrees (90, 180, or 270). It uniquely identifies the tool's purpose among siblings like flip or crop, and the verb-resource pair is unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like mio_video_flip or mio_video_crop. The description focuses on credit usage and pricing rather than usage context, leaving the agent without decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_speed (B)
Change Video Speed — Speed up or slow down videos with audio pitch preservation. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must bear the full behavioral disclosure burden. It mentions pitch preservation and in-browser processing, but lacks critical details such as input/output format, side effects, or permission requirements. The agent is left guessing about runtime behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise. It front-loads the main action but includes pricing and workspace details that could be trimmed or moved to a separate field. Still, no sentence is entirely superfluous.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of input and output schema, the description should provide more context on how to invoke the tool (e.g., does it require a selected file?) and what the output is. The current description covers credits but leaves operational details ambiguous.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, making schema coverage 100% trivially. The description does not need to add parameter details, but it also does not explain that no parameters are required. Baseline of 4 is appropriate for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Change Video Speed' with specific verb and resource, and includes the important detail of audio pitch preservation. It distinguishes from the sibling 'mio_audio_speed' by focusing on video, though not explicitly contrasting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides credit and workspace usage context (e.g., uses more credits, not in Day Pass), but does not offer explicit guidance on when to use this tool versus other video tools like 'mio_video_trim' or 'mio_audio_speed'. No alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_thumbnail (B)
Extract Video Thumbnail — Extract a frame from video as a JPEG thumbnail image. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided. The description discloses that the tool runs in the browser and uses credits, but fails to explain how input is provided (e.g., file selection via context) or how the output (JPEG thumbnail) is returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The purpose is front-loaded, but the description includes extensive billing details that could be shortened or moved. The structure is adequate but not optimally concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no parameters, the description must explain the full workflow. It omits how users provide the video and what the output looks like beyond 'JPEG thumbnail', leaving significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so no parameter documentation is needed. Baseline score of 4 applies as the description adds no confusion.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool extracts a video frame as a JPEG thumbnail, with a specific verb ('Extract') and resource ('video thumbnail'). It distinguishes from sibling tools by focusing on thumbnail extraction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives. It includes billing and credit information but does not explain scenarios or prerequisites for invoking the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_to_gif (B)
Video to GIF — Convert video clips to animated GIF. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions browser processing and credit usage, adding some behavioral context beyond the empty annotations. However, it does not disclose potential side effects, input requirements, or error scenarios, missing important transparency for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise and front-loads the purpose. Some pricing and workspace details could be considered extraneous, but they are not overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, no annotations, and no parameters, the description should explain inputs, outputs, and constraints. It only covers the output format and pricing, missing input specification and limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, but the tool clearly requires a video input. The description fails to explain how the input is provided or any implicit parameters, thus not adding meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Convert video clips to animated GIF', providing a specific verb (convert) and resource (video to GIF). It distinguishes itself from siblings like mio_video_gif_to_mp4, which performs the reverse conversion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. The name implies its purpose, but the description does not provide direct comparisons or usage context beyond credit consumption.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_to_mp3 (A)
Video to MP3 — Extract audio from video files. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses browser processing, higher credit cost, and workspace restrictions—valuable behavioral context beyond the core function. It does not detail side effects (e.g., deletion of original) but covers key cost and availability traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficient, starting with a clear purpose and then adding relevant credit context. It does not waste words, though the credit info could be placed later. It remains concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description explains output format (MP3) and credit usage. However, it omits supported input video formats, file size limits, and processing time expectations, leaving some gaps for a complete understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description cannot add parameter details. However, it fails to explain how the video file is provided (e.g., upload, reference), which is a critical gap. Baseline for 0 params is 4, but the missing input method reduces clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Video to MP3 — Extract audio from video files,' specifying exact verb and resource. It distinguishes from similar conversion tools by focusing on audio extraction, and the name itself is descriptive.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use (extract audio) and provides helpful credit/workspace context. However, it doesn't explicitly contrast with sibling tools like 'mio_video_mute' or 'mio_video_add_audio,' leaving some ambiguity about alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_trim (B)
Trim Video — Cut video to specific start and end times. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description notes that the tool 'processes in the browser' and mentions credit costs, but lacks details on behavioral traits like file handling, format support, or how start/end times are specified. Without annotations, the description partially covers transparency but leaves gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description includes a long pricing section that is only tangentially related. The core function is front-loaded, but the extra content makes it less concise than ideal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0 parameters, no output schema, and the tool's trimming nature, the description lacks essential context: input format, time specification format, output details, and limitations. The pricing info does not compensate for these gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema is empty (0 parameters). While the baseline is 4, the description does not explain how users provide start and end times. This omission is critical for a trimming tool, reducing the score to 2.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Trim Video — Cut video to specific start and end times.' It uses a specific verb ('Cut') and resource ('Video'), and the action is distinct from siblings like crop or compress.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description focuses heavily on pricing and credit usage but does not provide guidelines on when to use this tool versus alternatives (e.g., mio_video_crop). No explicit 'when-to-use' or 'when-not-to-use' advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
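For illustration only, here is the kind of explicit input schema that would close the gap noted above. None of these fields exist on the actual server; the names, formats, and constraints are hypothetical.

```typescript
// Hypothetical JSON Schema for a trim tool with an explicit contract.
// Every field below is illustrative, not part of mio_video_trim's real schema.
const trimInputSchema = {
  type: "object",
  properties: {
    file: {
      type: "string",
      description: "Reference to the source video, e.g. an uploaded file ID.",
    },
    start: {
      type: "string",
      description: "Trim start timestamp in HH:MM:SS(.mmm) form.",
      pattern: "^\\d{2}:\\d{2}:\\d{2}(\\.\\d{1,3})?$",
    },
    end: {
      type: "string",
      description: "Trim end timestamp; must be later than start.",
      pattern: "^\\d{2}:\\d{2}:\\d{2}(\\.\\d{1,3})?$",
    },
  },
  required: ["file", "start", "end"],
} as const;
```

With a schema like this, an agent can supply the cut points directly instead of relying on implicit session state.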
mio_video_wav_to_mp3 (A)
WAV to MP3 — Convert WAV audio to MP3 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. It discloses that processing happens in the browser, credits are consumed, and explains pricing structure, which is helpful for an agent to understand implications of usage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is verbose, including extraneous pricing marketing that could be moved to a separate help tool. The crucial functionality is mentioned only in the first sentence; the rest is tangential.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While credit and pricing context is provided, the description lacks operational details such as how to provide the WAV file (e.g., via upload, URL). Without this, the agent may not know how to invoke the tool correctly. No output schema exists, so return behavior is also unclear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has no parameters (schema description coverage 100% trivially). The description adds no parameter-specific meaning, but with no parameters, it does not need to. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool converts WAV audio to MP3 format. However, it does not distinguish it from other audio converters like mio_video_aac_to_mp3, which perform similar tasks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description covers credit consumption and pricing, but does not provide guidance on when to use this tool versus alternatives like mio_video_mp3_cutter or other audio converters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_webm_to_mp4 (A)
WebM to MP4 — Convert WebM videos to MP4 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries full behavioral disclosure. It mentions the tool runs in the browser, uses credits, and explains credit policies. However, it does not explicitly state that the operation is non-destructive or what happens to the original file, though conversion implies non-destructiveness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is slightly lengthy but each sentence serves a purpose: specifying the conversion, explaining credit usage, and clarifying pricing. It is well-structured and front-loaded with the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description provides a good overview of purpose, behavior, and pricing. It could include technical limitations (e.g., max file size, codec support), but overall it is fairly complete for a parameterless conversion tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters and is fully covered (100%). With no parameters, the description naturally adds no parameter information, which is appropriate. Baseline score of 4 for no parameters is justified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'WebM to MP4 — Convert WebM videos to MP4 format', providing a specific verb and resource. The tool's name also mirrors this, and the description distinguishes it from siblings by naming the exact input and output formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives context about credit usage and browser processing but does not explicitly state when to use this tool versus other video converters. It implies usage for WebM-to-MP4 conversion but lacks guidance on prerequisites or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_video_wma_to_mp3 (C)
WMA to MP3 — Convert WMA audio to MP3 format. Video And Audio Studio run — processes in the browser but uses more credits per run than the Document / Image / Scanner workspaces. Welcome credits cover a limited number of runs. Day Pass does not include this workspace. All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only mentions 'processes in the browser' and credit usage. It does not disclose whether the operation is destructive, what happens to the original file, or any limitations beyond credits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The purpose is front-loaded in the first sentence, but the rest is mostly pricing details that are better placed elsewhere. The description could be more concise without losing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters and no output schema, the description omits crucial invocation context, such as how to provide the WMA file. The heavy focus on credits does not compensate for the missing operational details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema coverage is 100%. The description adds no parameter information, which is acceptable given no parameters, but it fails to explain how the tool identifies the WMA source, leaving ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Convert WMA audio to MP3 format', which is a specific verb+resource. However, it does not differentiate from many sibling audio converter tools (e.g., mio_video_aac_to_mp3) beyond the format name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes extensive pricing and credit information but does not explain when to use this tool versus other converters, nor any prerequisites or alternatives. No guidance on how to provide the WMA file.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mio_xlsx_to_pdf (B)
Excel to PDF — Convert Excel spreadsheets to PDF format. Runs in the browser. Covered by signup welcome credits and by the Day Pass (24-hour unlimited on this workspace group). All three credit-based workspaces unlock with the same one-time credit pack — there is no per-workspace subscription. See mioffice.ai/pricing for current plans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It mentions 'Runs in the browser' but fails to describe file size limits, whether the conversion is destructive, how output is handled, or any rate limits. Pricing info is not a behavioral trait.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the core function. The pricing details are somewhat extraneous but not overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple conversion tool with no parameters, the description covers the basic function and pricing context but omits details like what happens after conversion (e.g., download link), input file handling, or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so baseline is 4 per guidelines. The description adds basic purpose ('Convert Excel spreadsheets to PDF format') but nothing about parameters since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Excel to PDF — Convert Excel spreadsheets to PDF format', which is a specific verb and resource. It distinguishes from sibling converters by naming the exact format transformation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternative conversion tools (e.g., other format converters). It only discusses credits and pricing, not selection criteria or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
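Before waiting on automatic detection, you can sanity-check that the file is publicly reachable and parses as JSON. A small sketch using the standard fetch API; your-domain.example is a placeholder for the domain that hosts your server:

```typescript
// Verify /.well-known/glama.json is served and well-formed.
// your-domain.example is a placeholder; substitute your server's domain.
const url = "https://your-domain.example/.well-known/glama.json";
const res = await fetch(url);
if (!res.ok) throw new Error(`HTTP ${res.status} fetching ${url}`);

const manifest = await res.json();
if (manifest.$schema !== "https://glama.ai/mcp/schemas/connector.json") {
  console.warn("Unexpected $schema value:", manifest.$schema);
}
console.log("maintainers:", manifest.maintainers);
```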
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.