Glama

Server Details

Quiz.Video MCP: list, create, AI-generate, and render quiz and flashcard videos.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.8/5 across 24 of 24 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool targets a distinct operation (e.g., quiz generation vs. manual creation, render polling vs. download), and descriptions clearly differentiate them. No two tools have overlapping purposes.

Naming Consistency: 5/5

All tools follow a consistent `quiz_video_<verb>_<noun>` pattern (e.g., `quiz_video_create_quiz`, `quiz_video_delete_quiz_hook`), while a few API-catalog tools use a parallel `get_*` prefix. There is no arbitrary mixing of conventions within either group.

Tool Count: 5/5

24 tools cover the full scope of the Quiz.Video API—quiz management, flashcards, hooks, renders, account info, music, and API discovery—without unnecessary duplication or missing essential operations.

Completeness: 4/5

CRUD operations are present for quizzes, hooks, renders, and flashcards. Minor gaps exist: no update or delete for individual quiz questions, and no update for flashcard decks. These are non-critical but noticeable.

Available Tools

24 tools
get_api_catalog: Get API catalog (Grade: A)
Read-only

Return the Quiz.Video API catalog linkset for agent discovery.

Parameters (JSON Schema): no parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true. The description adds 'linkset' and 'for agent discovery' but does not elaborate on side effects or other behaviors. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with key information, no wasted words. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, no output schema), the description is largely complete. It could briefly explain 'linkset' but is sufficient for context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so baseline is 4. Description correctly implies no input needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Return', the specific resource 'Quiz.Video API catalog linkset', and the purpose 'for agent discovery', distinguishing it from sibling tools like get_llms_txt and get_openapi_spec.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies use for agent discovery but does not contrast with other documentation retrieval tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_llms_txt: Get LLM API summary (Grade: A)
Read-only

Return a compact LLM-readable summary of the Quiz.Video API.

Parameters (JSON Schema): no parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds the context of 'compact' and 'LLM-readable', which complements the readOnlyHint annotation. However, it does not disclose other behavioral traits like caching, latency, or data freshness. Annotations already cover read-only behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, front-loaded sentence that conveys the tool's purpose with no unnecessary words. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters, no output schema, and a simple read-only operation, the description is complete. It specifies the output (summary) and its intended audience (LLM).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters and schema description coverage is 100%. The description does not need to add parameter details. Baseline for zero parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a compact LLM-readable summary of the Quiz.Video API, which distinguishes it from siblings like get_api_catalog (catalog) and get_openapi_spec (full spec). The verb 'return' and resource 'summary' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for obtaining an LLM-friendly summary, but it does not explicitly state when to use this tool over siblings (e.g., 'use get_openapi_spec for full details'). No alternative guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_openapi_spec: Get OpenAPI specification (Grade: A)
Read-only

Return the Quiz.Video OpenAPI 3.1 specification.

Parameters (JSON Schema): no parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotation readOnlyHint=true already indicates a read-only operation. The description adds no further behavioral context (e.g., output format, rate limits). With annotations present, the description is adequate but not enhanced.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that directly states the purpose with no extraneous words. It is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with no parameters and basic annotations, the description is nearly complete. It could mention the output format, but the current description is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters and 100% coverage. Per guidelines with 0 parameters, baseline is 4. The description adds no parameter information, which is acceptable as there are none to describe.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Return' and the specific resource 'Quiz.Video OpenAPI 3.1 specification', which goes beyond the tool name and distinguishes it from similar tools like get_api_catalog.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., get_api_catalog, get_llms_txt), nor any exclusions or prerequisites. This is a gap given the presence of sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_add_quiz_questions: Add questions to a quiz (Grade: A)

Append one or more questions (with their answers and optional images) to an existing quiz.

Parameters (JSON Schema):
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.
- questions (required): One or more questions to append to the quiz.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates it appends questions without replacing existing ones, which is useful. Annotations show readOnlyHint=false, destructiveHint=false, consistent with a non-destructive addition. However, it does not disclose idempotency (not idempotent per hint), error handling, or limits on question count, which would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, focused sentence that immediately conveys the tool's core functionality. No unnecessary words, and key elements (append, questions, answers, optional images, existing quiz) are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should ideally mention the return value (e.g., success status, updated quiz). It also lacks error scenarios (e.g., quiz not found). For a simple append tool, the description is minimally adequate but leaves some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, so the description does not need to reiterate parameter details. It adds no extra meaning beyond the schema (for example, it does not clarify that 'quizId' must refer to an existing quiz). The baseline score is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Append'), the resource ('questions to an existing quiz'), and includes important details like answers and optional images. It effectively distinguishes from sibling tools like quiz_video_create_quiz and quiz_video_update_quiz.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides the basic purpose but lacks explicit guidance on when to use this tool versus alternatives such as quiz_video_create_quiz (for initial setup) or quiz_video_update_quiz (for modifying existing data). It does not mention prerequisites (e.g., quiz must already exist) or postconditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_create_flashcard_deck: Create a flashcard deck (Grade: B)

Create a flashcard deck. Required: title (3-120 chars) and cards[] (min 1). Optional: description (≤1200 chars), tags (≤50 each).
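The stated constraints can be sketched as a payload builder with a small validation pass. This is a minimal sketch based only on the field names and limits listed above; the shape of each card object ("front"/"back") is an assumption, as the listing does not document it.

```python
# Hedged sketch of a quiz_video_create_flashcard_deck payload.
# Constraints taken from the description: title 3-120 chars, cards[] min 1,
# description <= 1200 chars, tags <= 50 chars each.

def validate_deck(payload: dict) -> list[str]:
    """Return a list of constraint violations (empty if the payload looks valid)."""
    errors = []
    if not 3 <= len(payload.get("title", "")) <= 120:
        errors.append("title must be 3-120 characters")
    if len(payload.get("cards", [])) < 1:
        errors.append("cards[] requires at least one card")
    if len(payload.get("description", "")) > 1200:
        errors.append("description must be <= 1200 characters")
    if any(len(tag) > 50 for tag in payload.get("tags", [])):
        errors.append("each tag must be <= 50 characters")
    return errors

deck = {
    "title": "Spanish Basics",
    "description": "Common greetings and phrases.",
    "tags": ["spanish", "beginner"],
    "cards": [{"front": "Hola", "back": "Hello"}],  # hypothetical card shape
}
assert validate_deck(deck) == []
```

Checking locally before calling the tool avoids a round trip when a title is too short or the cards list is empty.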

Parameters (JSON Schema):
- tags (optional): Optional tags to categorize the deck (≤50 characters each).
- cards (required): Flashcards in the deck; at least one card is required.
- title (required): Deck title (3–120 characters).
- description (optional): Optional deck description (≤1200 characters).

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description only says 'Create' which is consistent with annotations (readOnlyHint=false). No additional behavioral context beyond what annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, zero waste. Every character earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple creation tool with full schema coverage and no output schema, the description is adequate. Could mention return value but not necessary given typical expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline 3. Description succinctly summarizes required and optional fields but adds minimal new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Create a flashcard deck' and specifies required fields (title, cards) and optional fields. It distinguishes from siblings like delete, get, update tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives (e.g., update or delete). Does not mention when not to use it or any prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_create_quiz: Create a quiz (Grade: A)

Create a quiz. Prefer sending themeDescription or themeCustomization so the saved quiz has a custom visual theme; if omitted, the server derives one from the title/description. Omit backgroundMusicId to use default shared background music, or set null for silent. Required: title. Optional: description, format, quizType, template, countdownSeconds, difficulty, musicVolume, and questions[].
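The theme and music semantics above can be illustrated with a sample argument object. This is a sketch assuming the documented fields only; none of the values are required by the API except the title.

```python
# Hedged sketch of quiz_video_create_quiz arguments. Only "title" is
# required. Theme/music behavior per the description: include
# themeDescription for a custom visual theme; omit backgroundMusicId for
# the default shared track, or set it to None (null) for a silent video.

quiz_args = {
    "title": "World Capitals Challenge",
    "format": "tiktok",                # 9:16 vertical
    "quizType": "multiple_choice",
    "countdownSeconds": 5,             # documented range 3-15
    "musicVolume": 0.15,               # matches the documented default
    "themeDescription": "cyber neon",  # server derives themeCustomization
    # "backgroundMusicId" omitted -> default shared background music
}

# A silent variant sets backgroundMusicId explicitly to null:
silent_args = {**quiz_args, "backgroundMusicId": None}
```

Note the difference the description calls out: an omitted backgroundMusicId means the default track, while an explicit null means no music at all.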

Parameters (JSON Schema):
- title (required): Human-readable quiz title.
- format (optional): Output aspect ratio: "tiktok" (9:16 vertical) or "youtube" (16:9 horizontal).
- quizType (optional): Quiz mechanic: multiple_choice, reveal_answer, or picture_guess.
- template (optional): Visual template id (e.g. "neon", "minimal").
- questions (optional): Optional initial questions with their answers and images.
- difficulty (optional): Target difficulty level for the generated/created quiz.
- description (optional): Optional longer description shown on the quiz page.
- musicVolume (optional): Background music volume from 0 (silent) to 1 (full). Default 0.15.
- countdownSeconds (optional): Seconds of countdown shown before each question (3–15).
- themeDescription (optional): Natural-language custom visual theme prompt. Example: "golden luxury game show", "ocean glass", or "cyber neon". The server saves the generated themeCustomization and applies it automatically.
- backgroundMusicId (optional): Background music track id from /api/v1/music. Omit to use the default shared track; set null for silent.
- themeCustomization (optional): Explicit custom theme to save and apply to the quiz. Invalid colors/fonts are ignored by the API sanitizer.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Descriptions add key behavioral context: server derives theme if omitted, default background music behavior, sanitizer ignores invalid colors/fonts. Annotations indicate mutability (readOnlyHint=false) and open world (openWorldHint=true), and the description aligns with these without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences. The first sentence clearly states the purpose, and the second sentence packs critical usage tips for key parameters. No wasted words, every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 12 parameters, complex nested arrays, and no output schema, the description is adequate but lacks information about the return value (e.g., created quiz ID or object). Given the schema coverage and annotations, it covers most usage but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema coverage is 100%, the description provides extra semantic value: it recommends sending themeDescription/themeCustomization for custom themes, explains default derivation, and clarifies music volume default (0.15) which the schema does not specify. This significantly aids correct parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Create a quiz' and lists required/optional fields. However, it does not differentiate from the sibling tool 'quiz_video_generate_quiz', which has a similar purpose but uses AI generation. The verb+resource is clear, but lacking sibling distinction prevents a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides some guidance: prefer themeDescription/themeCustomization for custom theme, omit backgroundMusicId for default or set null for silent. Mentions required vs optional parameters. However, it does not specify when to use this tool over the 'generate_quiz' alternative, nor does it cover prerequisites or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_create_quiz_hook: Create a quiz hook (Grade: B)

Create a hook for a quiz. hook is a pass-through object whose fields follow the HookInput schema (see OpenAPI spec).
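Since the hook object is passed through to the HookInput schema, a call sketch is mostly about assembling that object. The field values below ("intro", "top") are illustrative guesses, not documented enums; the real ones live in the OpenAPI spec retrievable via get_openapi_spec.

```python
# Hedged sketch of quiz_video_create_quiz_hook arguments. Field names
# (hookType, positionType, title, content) come from the parameter
# description; the values are hypothetical placeholders.
hook_args = {
    "quizId": "dQw4w9WgXcQ",  # YouTube-style 11-character base64url id
    "hook": {
        "hookType": "intro",    # assumed value; see HookInput in the spec
        "positionType": "top",  # assumed value; see HookInput in the spec
        "title": "Can you beat 90% of players?",
        "content": "Only true geography nerds get 10/10.",
    },
}
```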

Parameters (JSON Schema):
- hook (required): HookInput object (hookType, positionType, title, content, plus optional styling — see OpenAPI spec).
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate a non-read-only, non-destructive operation, which aligns with the description. However, the description adds minimal behavioral context beyond creation, such as success/failure conditions or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. The first sentence states the purpose, the second explains the key parameter. Efficiently structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with two parameters and no output schema, the description provides basic understanding. However, it lacks details on return values, error handling, or interaction with quiz existence. Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description reinforces the pass-through nature of the hook parameter but adds little beyond what the schema already states. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The name, title, and description clearly indicate the tool creates a quiz hook. It distinguishes from sibling tools like update, delete, and list hooks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., update or delete hooks). It lacks context about prerequisites, such as requiring an existing quiz.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_create_render: Start render (Grade: A)

Queue a new video render for an existing quiz. Returns the render sessionId; poll quiz_video_get_render until its status is "completed" (typically 1-5 minutes), then call quiz_video_download_render to obtain the signed MP4 URL. The quiz itself is viewable immediately at /quiz/{slug}/ regardless of render status.
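The queue-poll-download workflow described above can be sketched as a small loop. This is a hedged sketch: `call_tool` stands in for whatever MCP client invocation you use, and the exact response field names ("sessionId", "status", "url") are assumptions drawn from the description, not a documented response schema.

```python
import time

def render_quiz(call_tool, quiz_id: str, poll_seconds: int = 10) -> str:
    """Queue a render, poll until completed, then return the signed MP4 URL.

    call_tool(name, arguments) is a hypothetical MCP client helper.
    """
    session = call_tool("quiz_video_create_render", {"quizId": quiz_id})
    session_id = session["sessionId"]
    while True:  # renders typically complete in 1-5 minutes
        status = call_tool("quiz_video_get_render", {"sessionId": session_id})
        if status["status"] == "completed":
            break
        time.sleep(poll_seconds)
    download = call_tool("quiz_video_download_render", {"sessionId": session_id})
    return download["url"]
```

A production version would add a timeout and handle a failed render status, but the shape of the loop follows directly from the tool description.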

Parameters (JSON Schema):
- quizId (required): ID of the quiz to render into a video.

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are minimal (readOnlyHint=false, destructiveHint=false). Description adds crucial behavioral context: asynchronous nature, polling requirement, and quiz viewability. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. First sentence states action and return value; second sentence explains next steps and additional information. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers the return value (sessionId), workflow (poll, download), timing, and concurrent quiz availability. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with description of quizId. The tool's overall description adds 'existing quiz' context, but the parameter description in schema already adequately explains the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it queues a new video render for an existing quiz, using specific verb 'queue' and resource 'video render'. It distinguishes from sibling tools like quiz_video_get_render and quiz_video_download_render by outlining the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear workflow: poll status after queuing, then download. Mentions typical wait time (1-5 min) and that quiz is viewable immediately. Lacks explicit when-not-to-use or alternatives, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_delete_flashcard_deck: Delete a flashcard deck (Grade: A)
Destructive, Idempotent

Permanently delete a flashcard deck and all of its cards.

Parameters (JSON Schema):
- deckId (required): Flashcard deck ID.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare destructiveHint=true and readOnlyHint=false, so the description's mention of 'Permanently delete' aligns but adds no new behavioral context (e.g., whether deletion cascades to related quizzes or requires confirmation).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no redundant or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete tool with one required parameter and no output schema, the description adequately explains the operation. However, it could mention the expected response (e.g., success or failure indication) for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the single parameter deckId is already documented as 'Flashcard deck ID' in the schema. The description does not add any additional meaning or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Permanently delete') and the resource ('flashcard deck and all of its cards'), distinguishing it from sibling tools like create, get, list, and update.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., update_flashcard_deck to modify), or any prerequisites or side effects beyond what is obvious from the name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_delete_quiz: Delete a quiz (Grade: A)
Destructive, Idempotent

Permanently delete a quiz and all of its questions, answers, and hooks.

Parameters (JSON Schema):
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate destructiveHint=true and idempotentHint=true. The description adds cascading deletion of questions, answers, and hooks, which is beyond the annotations. However, it does not disclose authorization requirements or rate limits. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that immediately conveys the action and scope. Every word is necessary, and no extraneous information is included. It is front-loaded with the verb 'delete'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple deletion tool with one parameter and no output schema, the description adequately covers the purpose, scope, and permanence. No additional details are needed for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of the single parameter (quizId) with a descriptive format hint. The description does not add any semantics beyond what the schema provides, so a score of 3 is appropriate per guidelines.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'delete' and resource 'quiz', and explicitly states cascading deletion of 'questions, answers, and hooks'. This distinguishes it from sibling tools like quiz_video_delete_quiz_hook, which only deletes a single hook. The purpose is unambiguous.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states 'permanently delete', implying irreversible action, but does not explicitly guide when to use this tool versus alternatives (e.g., quiz_video_delete_quiz_hook). The context of cascading deletion is implied, but no exclusions or selection criteria are provided.

quiz_video_delete_quiz_hook: Delete a quiz hook (A)
Destructive, Idempotent

Delete a single hook from a quiz.

Parameters (JSON Schema)
- hookId (required): Numeric id of the hook to delete.
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark it as destructive (destructiveHint=true). The description adds no further behavioral context (e.g., side effects, return value, or confirmation). With destructiveHint already set, the description carries minimal extra value.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no redundant words, front-loaded with verb and resource. Efficient.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete operation with clear parameters and annotations, the description is mostly complete. Could briefly mention that deletion is immediate and irreversible, but no output schema is needed.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and parameter descriptions are adequate. The description does not add meaning beyond the schema, so baseline score applies.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete'), the target resource ('a single hook'), and the context ('from a quiz'), distinguishing it from sibling tools like create or update hooks.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool (e.g., hook must exist, required permissions, or that it's irreversible). Lacks exclusion criteria or alternatives like deleting all hooks.

quiz_video_download_render: Get render download URL (A)
Idempotent

Request a signed download URL for a completed render.

Parameters (JSON Schema)
- sessionId (required): Render session id for a completed render.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate idempotency and non-destructive nature. The description adds that the URL is 'signed,' but does not disclose additional behaviors like expiration or side effects.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded, no wasted words. Efficiently conveys the tool's purpose.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is nearly complete. Minor gaps: no mention of URL format or temporary nature, but overall adequate.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description does not add meaning beyond the schema's parameter description. Baseline score of 3 applies.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'request' and the resource 'signed download URL for a completed render.' It effectively distinguishes from siblings like quiz_video_create_render and quiz_video_get_render.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for downloading a completed render but does not explicitly state prerequisites or when to avoid use. No guidance on alternatives like quiz_video_get_render.

quiz_video_generate_quiz: Generate a quiz with AI (A)

AI-generate and save a quiz from a topic. Prefer providing themeDescription or themeCustomization; when omitted, the server derives and saves a topic-based custom theme. Omit backgroundMusicId to use default shared background music, or set null for silent. The response data always includes a watchUrl (the public quiz-viewer page, instantly playable). When autoRender is true, data.render also contains the queued render session so the agent can poll quiz_video_get_render for the MP4.

Parameters (JSON Schema)
- topic (required): Subject the AI should build the quiz around.
- format: Output aspect ratio. Defaults to "tiktok".
- quizType: Quiz mechanic to generate.
- template: Visual template id. If omitted, the saved custom theme can suggest a matching template.
- autoRender: If true, immediately queue a video render for the new quiz. The render session (sessionId, status) is returned under `data.render`; poll quiz_video_get_render with that sessionId for progress and the final videoUrl. Rendering typically takes 1-5 minutes. Quiz creation is not blocked by render-queue failures — the quiz is returned either way.
- difficulty: Target difficulty level.
- musicVolume: Background music volume from 0 (silent) to 1 (full). Default 0.15.
- extraDirection: Additional instructions to steer the AI (tone, focus areas, exclusions).
- countdownSeconds: Seconds of countdown shown before each question (3-15).
- progressBarStyle: Countdown progress indicator style.
- themeDescription: Natural-language custom visual theme prompt. Example: "golden luxury game show", "ocean glass", or "cyber neon".
- answerOptionCount: For multiple-choice quizzes, generate 3 or 4 answer options per question. Defaults to 4.
- backgroundMusicId: Background music track id from /api/v1/music. Omit to use the default shared track; set null for silent.
- numberOfQuestions: How many questions to generate (1–20).
- themeCustomization: Explicit custom theme to save and apply to the generated quiz. Use themeDescription for prompt-style themes.
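
A client can mirror the documented ranges above before calling the tool, failing fast instead of burning a server round-trip. A hedged sketch; the camelCase payload keys come from the parameter table, but the helper function itself is hypothetical:

```python
def build_generate_quiz_args(topic: str, number_of_questions: int = 10,
                             music_volume: float = 0.15,
                             countdown_seconds: int = 5,
                             answer_option_count: int = 4) -> dict:
    """Assemble quiz_video_generate_quiz arguments, enforcing documented ranges."""
    if not topic:
        raise ValueError("topic is required")
    if not 1 <= number_of_questions <= 20:
        raise ValueError("numberOfQuestions must be between 1 and 20")
    if not 0 <= music_volume <= 1:
        raise ValueError("musicVolume must be between 0 (silent) and 1 (full)")
    if not 3 <= countdown_seconds <= 15:
        raise ValueError("countdownSeconds must be between 3 and 15")
    if answer_option_count not in (3, 4):
        raise ValueError("answerOptionCount must be 3 or 4")
    return {
        "topic": topic,
        "numberOfQuestions": number_of_questions,
        "musicVolume": music_volume,
        "countdownSeconds": countdown_seconds,
        "answerOptionCount": answer_option_count,
    }

args = build_generate_quiz_args("Roman history", number_of_questions=8)
```
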
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it explains that the response includes a watchUrl, that autoRender triggers a queued render session, and that quiz creation is not blocked by render failures. This complements the annotations (readOnlyHint false, openWorldHint true) well.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with four sentences, each adding value. It front-loads the primary action and logically flows into guidance and response details. No redundant text.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (14 parameters, nested object, no output schema), the description covers the essential aspects: AI generation, theme customization, music, and rendering. It explains the response structure (watchUrl, render session) and links to a sibling tool for polling. Minor gap: the response structure for non-autoRender cases is not fully detailed.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds extra semantics: it recommends using themeDescription or themeCustomization over omission, explains the default behavior for backgroundMusicId, and clarifies the effect of autoRender on the response. This goes beyond the schema descriptions.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates and saves a quiz from a topic. However, it does not differentiate itself from the sibling tool 'quiz_video_create_quiz', which may serve a similar purpose. The verb 'AI-generate' implies AI involvement but lacks explicit distinction.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on preferring themeDescription or themeCustomization and explains behavior when omitted. It also covers autoRender polling. However, it does not specify when to use this tool over alternatives like quiz_video_create_quiz, nor does it state when not to use it.

quiz_video_get_account: Get account (A)
Read-only

Get the authenticated user's account info, plan, and usage limits.

Parameters (JSON Schema)

No parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond the readOnlyHint annotation by specifying the returned data includes account info, plan, and usage limits. It does not contradict the annotation and provides clear behavioral scope without unnecessary details.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no redundant words. It efficiently communicates the tool's purpose and output, earning its place without excess.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only account retrieval tool with no parameters and no output schema, the description provides sufficient information. It could optionally mention response format or authentication context, but is complete enough for typical usage.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With no parameters (0 params, 100% schema coverage), the description adds value by stating what the tool returns. The baseline for 0 parameters is 4, and the description meets that by explaining the output meaningfully.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the authenticated user's account info, plan, and usage limits. This specific verb+resource combination (get account) distinguishes it from siblings, which focus on quizzes, flashcards, renders, and other features.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates the tool should be used when account details are needed, but it lacks explicit guidance on when not to use it or which alternative tools exist. Given the diverse sibling set, a note like 'For other user-related tasks, see other tools' would help, but the purpose is clear enough.

quiz_video_get_flashcard_deck: Get a flashcard deck (A)
Read-only

Fetch a flashcard deck (including all cards) by id.

Parameters (JSON Schema)
- deckId (required): Flashcard deck ID.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true. The description adds value by specifying that it includes all cards, which is beyond what annotations offer. No contradiction.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, front-loaded with the key action, and contains no filler or redundant information.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter fetch tool with no output schema, the description fully covers what the tool does and what it returns. It is complete.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the description 'Flashcard deck ID.' The description does not add meaning beyond what the schema already provides, so baseline of 3 is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Fetch' and the resource 'flashcard deck (including all cards) by id'. It distinguishes this tool from sibling tools like quiz_video_list_flashcard_decks (which lists all decks) and create/delete variants.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The usage context is implied: use this to get a specific deck by ID, and the list tool to get all decks. No explicit exclusions or alternatives are stated, but the tool's single purpose makes the use case clear.

quiz_video_get_quiz: Get a quiz (A)
Read-only

Fetch a single quiz (including settings and metadata) by id.

Parameters (JSON Schema)
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the agent knows this is safe. The description adds that the response includes settings and metadata, providing some extra behavioral context beyond annotations.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that immediately communicates the tool's purpose. No redundant words.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (single required ID parameter, no output schema), the description is sufficiently complete to inform the agent about what the tool does and what to expect.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a single parameter described as 'Quiz ID in YouTube-style 11-character base64url format.' The description does not add additional parameter details, so baseline score applies.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('fetch a single quiz') and the resource ('quiz'), including what is included (settings and metadata), distinguishing it from sibling tools like list or update.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving a specific quiz by ID. While it doesn't explicitly mention when not to use or alternatives, the context of sibling tool names (e.g., list_quizzes, update_quiz) makes the use case clear.

quiz_video_get_render: Get render status (A)
Read-only

Fetch the status and progress of a render session. When status is "completed", the response also contains a signed videoUrl (and filename) so the agent can share the MP4 directly without a separate quiz_video_download_render call. In-progress polls return status + progress.

Parameters (JSON Schema)
- sessionId (required): Render session id returned when the render was started.
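
The autoRender flow described for quiz_video_generate_quiz implies a poll loop against this tool. A minimal sketch, assuming a generic `call_tool(name, arguments)` client callable (hypothetical) and the "completed" status quoted above; the "failed" status is an assumption, not documented here:

```python
import time

def poll_render(call_tool, session_id: str,
                interval_s: float = 10.0, timeout_s: float = 600.0) -> dict:
    """Poll quiz_video_get_render until the render session finishes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = call_tool("quiz_video_get_render", {"sessionId": session_id})
        status = resp.get("status")
        if status == "completed":
            return resp  # includes the signed videoUrl and filename
        if status == "failed":  # assumed terminal status, not documented
            raise RuntimeError(f"render {session_id} failed")
        time.sleep(interval_s)
    raise TimeoutError(f"render {session_id} did not complete in {timeout_s}s")
```

Rendering typically takes 1-5 minutes per the generate_quiz description, so a 10-second interval keeps polling cheap.
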
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds behavioral context beyond the readOnlyHint annotation by detailing response contents for completed (videoUrl, filename) and in-progress (progress) states. No contradiction with annotations.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each adding essential information. It is front-loaded with the core purpose and efficiently conveys key details without redundancy.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains response behaviors for both completed and in-progress states. For a simple polling tool with one parameter, this is complete and actionable.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'sessionId' has a clear description in the schema. The description adds no additional meaning beyond what the schema provides. Schema coverage is 100%, so baseline score 3 is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (fetch) and resource (status/progress of render session). It also differentiates from sibling quiz_video_download_render by noting that completed responses include videoUrl/filename, eliminating the need for a separate call.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use it (to check status), and explicitly states that if completed, the agent can share directly without a separate download call. However, it does not explicitly mention when not to use it or provide alternatives for other scenarios.

quiz_video_list_flashcard_decks: List flashcard decks (A)
Read-only

List flashcard decks owned by the authenticated user with optional pagination.

Parameters (JSON Schema)
- page: 1-indexed page number.
- limit: Results per page (1–100, default 20).
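
The page and limit parameters suggest a standard pagination loop. A sketch under the assumption that a short page marks the last page (the response shape is not documented here); `call_tool` is a hypothetical client callable returning the list of decks for a page:

```python
def iter_all_decks(call_tool, limit: int = 20):
    """Walk every page of quiz_video_list_flashcard_decks."""
    page = 1
    while True:
        decks = call_tool("quiz_video_list_flashcard_decks",
                          {"page": page, "limit": limit})
        yield from decks
        if len(decks) < limit:  # short page: assumed end of results
            break
        page += 1
```
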
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, making the safe read nature clear. The description adds ownership scope and pagination detail, but does not disclose other potential behaviors like sorting, default limits, or response structure.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the core purpose. No redundant or unnecessary words.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (a list with pagination) and the presence of annotations, the description adequately covers ownership and pagination. However, it lacks details about ordering and output format, and no output schema exists to compensate.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for page and limit. The description only adds 'with optional pagination', which does not provide new semantic meaning beyond the schema. Baseline score is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'List flashcard decks' and specifies they are 'owned by the authenticated user', distinguishing it from sibling tools like get_flashcard_deck (single deck) or create/delete operations.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions optional pagination but does not provide explicit guidance on when to use this tool versus alternatives like get_flashcard_deck for retrieving a specific deck. The usage context is implied rather than stated.

quiz_video_list_music: List music library (A)
Read-only

List available background music tracks.

Parameters (JSON Schema)

No parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark readOnlyHint=true, so the description adds no new behavioral context. It does not mention return format, pagination, or any side effects. The description fails to add value beyond the annotation.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no unnecessary words. Direct and to the point.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless tool with a readOnly annotation, the description is mostly sufficient. It could state more explicitly that the result is a list of music tracks, but this is not essential.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has zero parameters with 100% coverage, so the baseline is 4. The description adds no parameter information, but none is needed; it correctly implies that no parameters are required.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('list') and resource ('background music tracks'). It effectively distinguishes from sibling list tools like list_quizzes or list_flashcard_decks by specifying the unique resource.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., other list tools). The description only states what it does without providing context for selection.

quiz_video_list_quiz_hooks: List quiz hooks (A)
Read-only

List video hooks configured for a quiz.

Parameters (JSON Schema)
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, and the description's 'List' action is consistent with a read-only operation. The description adds context that the hooks are 'configured for a quiz', which is useful but not extensive.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no wasted words. It is well-structured and front-loaded with the action.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the tool is simple (one parameter, read-only), the description does not explain the return format or behavior (e.g., whether it returns an array or paginated results). Given no output schema, more detail would improve completeness.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description of the quizId parameter. The description does not add additional parameter semantics beyond what the schema provides, warranting a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List video hooks configured for a quiz' clearly states the action (list) and the resource (video hooks) with a specific scope (for a quiz). It distinguishes from sibling tools like create, delete, and update variants.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as quiz_video_create_quiz_hook or quiz_video_update_quiz_hook. The context is implied only by the tool name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_list_quiz_questions: List quiz questions (grade B)
Read-only

List questions (and their answers) for a quiz.

Parameters (JSON Schema)
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

readOnlyHint annotation already indicates no side effects. The description adds that answers are included, which is useful. No mention of pagination, limits, or ordering, but basic transparency is met.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is front-loaded with the key action. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema and no description of return format. Lacks details about error handling or response structure, which is important for a list endpoint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with a clear description for quizId. The description does not add extra meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists questions and their answers for a quiz, using specific verb 'list' and resource 'questions'. It distinguishes from siblings like quiz_video_add_quiz_questions and quiz_video_get_quiz.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like quiz_video_get_quiz. No prerequisites or context provided; the agent must infer that a quiz must exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_list_quizzes: List quizzes (grade A)
Read-only

List quizzes owned by the authenticated user with optional pagination (page, limit).

Parameters (JSON Schema)
- page (optional): 1-indexed page number.
- limit (optional): Results per page (1–100, default 20).
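The documented pagination constraints (page is 1-indexed; limit ranges 1–100 with a default of 20) can be normalized before building the request. A minimal sketch under those stated bounds; the helper name is hypothetical:

```python
def normalize_pagination(page=None, limit=None):
    """Apply the documented defaults and bounds for quiz_video_list_quizzes.

    page:  1-indexed; omitted means the first page.
    limit: results per page, clamped to 1-100, defaulting to 20 per the schema.
    """
    page = 1 if page is None else max(1, int(page))
    limit = 20 if limit is None else min(100, max(1, int(limit)))
    return {"page": page, "limit": limit}

print(normalize_pagination())                    # {'page': 1, 'limit': 20}
print(normalize_pagination(page=3, limit=250))   # limit clamped to 100
```

Clamping client-side keeps an agent from sending out-of-range values the server would reject or silently coerce.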
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, and the description adds pagination and ownership scope, but no additional behavioral traits like rate limits or return structure are disclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the purpose and pagination details, with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only list tool with complete schema and annotations, the description is mostly adequate, though it could optionally mention the response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description's mention of pagination parameters adds no new meaning beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'list', the resource 'quizzes owned by the authenticated user', and includes pagination details, distinguishing it from other quiz tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives like get_quiz or create_quiz, leaving the agent to infer based on the verb.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_update_quiz: Update quiz settings (grade A)
Idempotent

Update a quiz. updates accepts any subset of quiz settings (title, description, format, template, timing, music, TTS, publish status, etc.).

Parameters (JSON Schema)
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.
- updates (required): Partial quiz settings object; only included fields are updated.
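The "only included fields are updated" semantics of the updates parameter behave like a shallow merge over the stored quiz. A local sketch of that behavior, assuming shallow-merge semantics; the quiz record and field names are illustrative, not the actual API shape:

```python
def apply_partial_update(quiz: dict, updates: dict) -> dict:
    """Shallow-merge updates into quiz: keys absent from updates are untouched.

    Mirrors the documented partial-update behavior of quiz_video_update_quiz;
    the field names below are illustrative only.
    """
    return {**quiz, **updates}

quiz = {"title": "Capitals", "published": False, "ttsVoice": "en-US"}
patched = apply_partial_update(quiz, {"published": True})
print(patched)  # title and ttsVoice preserved; only published changed
```

An agent therefore only needs to send the fields it intends to change, which keeps update payloads small and avoids accidentally reverting settings it never read.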
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate idempotency, non-read-only, and non-destructive behavior. The description adds no additional behavioral context beyond what the annotations and schema provide, such as error handling or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the core action and the flexible nature of updates. Every word earns its place, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity with well-documented parameters and annotations, the description is adequate. However, it omits any mention of return values or error conditions, which would be helpful since no output schema is present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for all parameters. The description merely summarizes the parameter types ('updates accepts any subset of quiz settings') without adding new meaning or constraints beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Update a quiz') and specifies the resource and the flexible nature of updates. It lists the types of settings that can be updated, making the purpose precise and distinguishing it from create/delete sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates use for updating existing quizzes but provides no explicit guidance on when to use this tool versus alternatives like create or delete. No when-not-to-use or alternative recommendations are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quiz_video_update_quiz_hook: Update a quiz hook (grade A)
Idempotent

Update an existing hook on a quiz. Requires quizId and numeric hookId; updates is a partial HookInput.

Parameters (JSON Schema)
- hookId (required): Numeric id of the hook to update (from list_quiz_hooks).
- quizId (required): Quiz ID in YouTube-style 11-character base64url format.
- updates (required): Partial HookInput object; only included fields are updated.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate that the tool is not read-only, not destructive, idempotent, and open-world. The description adds that updates are partial, which aligns with idempotency. However, it does not elaborate on side effects, permissions, or response behavior beyond that.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that efficiently communicate the action, the required identifiers, and the nature of the updates parameter. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the core purpose and requirements but does not mention what the tool returns (no output schema) or any constraints. For a mutation tool, it could be more complete by noting typical responses or prerequisites.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters are described in the schema (100% coverage). The description rephrases the schema: hookId is numeric, updates is partial HookInput. It adds no significant new meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Update an existing hook on a quiz', specifying the action (update) and resource (quiz hook). It also mentions key requirements (quizId, numeric hookId) and the nature of the updates parameter, distinguishing it from sibling tools for creating, deleting, or listing hooks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating it updates an existing hook and requires a hookId, but it does not explicitly provide guidance on when to use this tool versus alternatives like create or delete hooks. No when-not-to-use or context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
