Glama

Server Details

Public Cannon Studio MCP for product, pricing, workflow, model, and API answers.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 14 of 14 tools scored. Lowest: 3.1/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct operation or query (e.g., account status, competitor comparison, generation creation, cost estimation, API docs, checkout, etc.) with no overlapping purposes. Descriptions clearly specify when to use each, avoiding ambiguity.

Naming Consistency: 4/5

Most tools follow a verb_noun pattern using underscores (e.g., create_generation_request, list_capabilities), but there are minor deviations: api_status (noun_noun) and fetch (single verb). The overall pattern is mostly consistent.

Tool Count: 5/5

14 tools cover the major capabilities of Cannon Studio—billing, generation, search, comparison, workflow recommendations, API info, etc.—without feeling bloated or too sparse. The scope matches the platform's complexity.

Completeness: 4/5

Core operations (create, get, list, search, compare, recommend) are present, but lifecycle coverage has minor gaps: generation has create and get but no cancel/update, and a user's own generations can only be retrieved by id, with no list operation. These gaps are noted but do not severely hinder agent workflows.

Available Tools

14 tools
api_status: Check Cannon Studio Developer API Status (grade A)

Check authenticated Cannon Studio account/API connectivity before estimating or creating requests. Requires OAuth or a developer API key; may update key/token usage metadata, but does not spend credits, enqueue jobs, change assets, or expose secrets.

Parameters (JSON Schema)

No parameters

Output Schema

Fields (JSON Schema)
ok (required)
status (required)
response (required)
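For a no-parameter tool like this, a minimal MCP `tools/call` envelope is easy to sketch. The tool name comes from this listing; the request id and transport framing are illustrative:

```python
import json

# Illustrative JSON-RPC 2.0 envelope for an MCP tools/call request.
# api_status takes no parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "api_status", "arguments": {}},
}
payload = json.dumps(request)
```

Over Streamable HTTP this payload would be POSTed to the server URL shown above.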
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations lack behavioral info (all hints false). The description adds that the tool 'may update key/token usage metadata', a non-trivial side effect beyond status checking, which compensates for the missing annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loads the usage condition, and each sentence adds value. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple status-check tool with an output schema, the description provides purpose, usage precondition, and behavioral nuance. It is fully complete given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, so schema description coverage is 100%. With no parameters, the description need not add param details, and baseline 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks account status and may update usage metadata. This is specific and actionable. It does not explicitly distinguish from siblings, but no other sibling appears to perform a similar function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states the condition for use explicitly: it requires a Cannon Studio OAuth token or developer API key. This provides clear context, though no alternatives or when-not-to-use guidance is mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_alternatives: Compare Cannon Studio Alternatives (grade A)
Annotations: read-only, idempotent

Compare Cannon Studio's fit against named alternatives for a use case. Public read-only: no auth, no state changes, no charges; it returns approved positioning and cautions agents not to invent competitor claims.

Parameters (JSON Schema)
use_case (required): Specific job-to-be-done for the comparison, such as UGC ads, AI filmmaking, image generation, 3D workflows, team review, or API media generation.
alternatives (optional): Competitor/tool names the user mentioned, such as Runway, LTX Studio, Pika, Midjourney, Higgsfield, or a generic point generator.

Output Schema

Fields (JSON Schema)
caution (required)
sources (required)
use_case (required)
positioning (required)
alternatives (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent traits. The description adds value by stating the tool returns 'approved positioning and source links without inventing competitor claims', which clarifies behavioral boundaries beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, directly front-loading the purpose and behavioral constraints. Every sentence is meaningful with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple, read-only comparison tool with an output schema, the description covers core purpose, usage context, and behavioral constraints. The missing parameter semantics are a gap, but overall completeness is high.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must explain parameters. It does not mention the 'use_case' or 'alternatives' parameters at all, leaving the agent to infer their meaning. This is a significant gap for parameter guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to compare Cannon Studio with other tools. It specifies the verb 'compare', the resource 'Cannon Studio alternatives', and the scope: returns approved positioning with source links without inventing claims.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description frames the tool around comparing Cannon Studio's fit against named alternatives for a use case, which provides clear usage context. However, it does not mention when not to use it or explicitly contrast with siblings like 'recommend_workflow' or 'search', limiting guidance slightly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_generation_request: Create Cannon Studio Generation Request (grade A)
Annotations: destructive

Create billable async Cannon Studio generation work only after explicit user approval. Requires OAuth or a developer API key; can spend credits up to max_credits and cannot be cancelled through MCP after submission. Use estimate_generation_cost first, then set confirmed=true and a user-approved max_credits cap. This tool does not create API keys, charge payment methods directly, or delete assets.

Parameters (JSON Schema)
input (required): Operation-specific request payload. Use the exact shape documented by get_api_operation for the selected operation; this is the billable payload that will be submitted if confirmed and within max_credits.
confirmed (optional): Must be true only after the user explicitly approves this billable generation request, including operation, payload, and max_credits. Missing or false returns a confirmation error and creates no job.
operation (required): Cannon Studio developer API operation id to run. Use get_api_operation first if unsure. Examples: image.generate, video.generate, three_d.model.generate, three_d.location.generate, music.generate, narration.generate, subtitles.generate.
max_credits (optional): Highest credit spend the user explicitly approved for this request. The tool rejects the request when the current estimate is greater than this cap.
webhook_url (optional): HTTPS URL that Cannon Studio calls when the request reaches a terminal succeeded or failed state. Omit when polling with get_generation_request.
idempotency_key (optional): Stable retry key for the same operation and payload. Reuse it when retrying after a network/client error; do not generate a new key for the same intended request.
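Putting the parameter rules above together, a safe call might be assembled like this. `build_generation_request` is a hypothetical helper, and the prompt, model id, and credit cap are placeholders:

```python
import uuid

def build_generation_request(operation, input_payload, approved_cap):
    """Hypothetical helper: assemble the arguments for
    create_generation_request after the user has approved the
    estimate returned by estimate_generation_cost."""
    return {
        "operation": operation,
        "input": input_payload,
        "confirmed": True,            # set only after explicit user approval
        "max_credits": approved_cap,  # user-approved spend cap
        # Stable key so a network retry does not enqueue duplicate work.
        "idempotency_key": str(uuid.uuid4()),
    }

args = build_generation_request(
    "image.generate",
    {"prompt": "a lighthouse at dusk",
     "model": "example-model",        # placeholder model id
     "aspect_ratio": "16:9"},
    approved_cap=50,
)
```

Reusing the same `idempotency_key` on a retry is what keeps a flaky network from enqueueing the billable job twice.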

Output Schema

Fields (JSON Schema)
ok (optional)
error (optional)
status (optional)
response (optional)
maxCredits (optional)
estimatedCredits (optional)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds critical behavioral context beyond annotations: billable nature (consistent with destructiveHint) and lack of MCP cancellation. It also specifies authentication requirements, enriching the agent's understanding of side effects and constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the most critical constraints (user confirmation and auth). Every sentence adds essential information, and no redundancy exists.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers authentication, billable nature, and cancellation limitation. With an output schema present, the return structure is documented separately. However, it could mention the async polling pattern or suggest using get_generation_request for status updates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 33%, and the tool description does not elaborate on any parameters. It only indirectly relates to the 'confirmed' parameter through usage guidance. No additional semantic value is provided for the input parameters beyond what the schema already offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it creates billable async generation work, distinguishing it from other tools like estimate_generation_cost or get_generation_request. The purpose is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly requires explicit user confirmation, an OAuth token or API key, and warns about no cancellation. It lacks explicit when-not-to-use or alternative tools, but provides strong contextual guidance for safe usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

estimate_generation_cost: Estimate Developer API Generation Cost (grade B)

Estimate credits for a Cannon Studio generation request before creating billable work. Requires OAuth or a developer API key; it may update key/token usage metadata but does not spend credits, enqueue jobs, or change assets. Use get_api_operation first if operation or input fields are unclear, then pass the same operation/input pair to create_generation_request after user approval.

Parameters (JSON Schema)
input (required): Operation-specific request payload to estimate. Use the exact shape documented by get_api_operation for the selected operation; for example, image.generate expects fields like prompt/model/aspect_ratio, video.generate expects prompt/model/duration/aspect_ratio, and three_d.location.generate expects source_image_urls plus optional angle_context.
operation (required): Cannon Studio developer API operation id to price. Use get_api_operation first if unsure. Examples: image.generate, video.generate, three_d.model.generate, three_d.location.generate, music.generate, narration.generate, subtitles.generate.

Output Schema

Fields (JSON Schema)
ok (optional)
note (optional)
error (optional)
label (optional)
status (optional)
response (optional)
operation (optional)
estimatedCredits (optional)
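The estimate-then-create flow the description recommends can be sketched as a simple gate. The `estimatedCredits` field name comes from the output schema above, while the helper itself is hypothetical:

```python
def within_approved_cap(estimate_response, user_cap):
    """Hypothetical gate: proceed to create_generation_request only
    when an estimate exists and is within the user-approved cap."""
    estimated = estimate_response.get("estimatedCredits")
    return estimated is not None and estimated <= user_cap

# A 42-credit estimate passes a 50-credit cap; a 60-credit one would not.
ok = within_approved_cap({"ok": True, "estimatedCredits": 42}, user_cap=50)
```

This mirrors the documented rejection rule on max_credits: the create tool refuses a request whose current estimate exceeds the approved cap.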
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description implies read-only operation (estimate before billable request) and no contradictions with annotations (all false). However, no explicit behavioral traits disclosed beyond usage context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tight sentences, front-loaded with the usage condition, no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having output schema, description lacks parameter details for nested objects and required fields. Incomplete for a tool with complex inputs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0% description coverage and description does not explain parameters 'input' or 'operation'. Agent has no guidance on parameter values or structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states purpose: estimate credit usage before creating a billable request. It specifies verb (estimate) and resource (generation cost), but does not explicitly differentiate from sibling tools like get_pricing_context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description provides clear when-to-use context (OAuth token present, want cost estimate) but no when-not-to-use or alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fetch: Fetch Cannon Studio Knowledge Record (grade A)
Annotations: read-only, idempotent

Fetch one public Cannon Studio knowledge record by id after search. Public read-only: no auth, no state changes, no charges; use search first when you do not already have a record id.

Parameters (JSON Schema)
id (required): Record id returned by the search tool.

Output Schema

Fields (JSON Schema)
id (required)
url (required)
text (required)
title (required)
metadata (optional)
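The search-then-fetch pattern the description mandates can be sketched as below. `call_tool` stands in for whatever MCP client invocation you use, and the `query` argument name for the search tool is an assumption, since that tool's schema is not shown in this listing:

```python
def answer_from_knowledge_base(call_tool, query):
    """Hypothetical flow: search first, then fetch the top record by id."""
    hits = call_tool("search", {"query": query})  # argument name assumed
    if not hits:
        return None
    # Only ids returned by search are valid inputs to fetch.
    record = call_tool("fetch", {"id": hits[0]["id"]})
    return record.get("text")
```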
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds value beyond the annotations by clarifying the public, no-auth, no-charge behavior and by tying the record id to a prior search, while the output schema documents the returned text, canonical URL, title, and metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that convey all necessary information without any fluff. It is front-loaded and every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema, the description's mention of return content is sufficient. Combined with clear annotations and a simple parameter set, the tool definition is complete and contextually adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description for 'id' already states 'Record id returned by the search tool.', matching the description. With 100% schema coverage, the description adds no new meaning to the parameter, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb 'fetch' and the resource 'Cannon Studio knowledge record', and distinguishes the tool from siblings by directing agents to use search first when they do not already have a record id.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this after search', providing clear context for when to invoke the tool. It does not explicitly exclude alternative scenarios, but the context of sibling tools (e.g., search) makes the usage pattern clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_api_operation: Get Cannon Studio API Operation Docs (grade A)
Annotations: read-only, idempotent

Return public docs for Cannon Studio developer API operations and payload shapes. Public read-only: no auth, no state changes, no charges; use this before estimate_generation_cost or create_generation_request when operation/input fields are unclear.

Parameters (JSON Schema)
operation (optional): Operation id such as image.generate, video.generate, three_d.model.generate, three_d.location.generate, narration.generate, or subtitles.generate.

Output Schema

Fields (JSON Schema)
operations (required)
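Since the `operation` parameter is optional, the same tool serves two distinct calls. Both argument shapes below use operation ids taken from this listing:

```python
# Omitting "operation" lists every supported operation...
list_all_args = {}

# ...while passing one id fetches docs for just that operation.
one_op_args = {"operation": "image.generate"}
```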
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly, idempotent, non-destructive. Description adds behavioral detail about listing all operations when parameter is omitted, which is useful beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and usage. Every sentence adds value; no superfluous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool is simple, has output schema, and annotations. Description covers enough context for an agent to use it appropriately for querying API docs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and description adds the key behavior: omitting 'operation' lists all supported operations. This provides meaning beyond the schema's parameter description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves Cannon Studio API operations docs, request fields, examples, models, or output shapes. The verb 'get' and resource are explicit, and it distinguishes from sibling tools which handle status, creation, cost, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The tool explicitly directs developers to use it before estimate_generation_cost or create_generation_request when operation or input fields are unclear, and the optional operation parameter notes that omitting it lists every supported operation. This gives clear context for when to use and how to invoke.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_generation_request: Poll and Sync Cannon Studio Generation Request (grade B)

Poll and sync an existing Cannon Studio generation request by id. Requires OAuth or a developer API key; not a pure read because it may update lastPolledAt, sync downstream task state, update logs, and deliver one pending terminal webhook. It does not create work, spend credits, cancel jobs, delete data, or change assets. Poll sparingly using poll_after_ms or 10-30 second intervals.

Parameters (JSON Schema)
request_id (required): Cannon Studio request id returned by create_generation_request or POST /api/v1/requests. This is not a provider task id.
include_logs (optional): Set true only when the user explicitly asks to inspect retained request logs for this request.

Output Schema

Fields (JSON Schema)
ok (optional)
error (optional)
status (optional)
response (optional)
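The "poll sparingly" guidance above can be sketched as a loop that honors a server-suggested delay. `call_tool` is a stand-in for your MCP client, the `poll_after_ms` field is taken from the description's wording, and the terminal status names follow the "succeeded or failed" phrasing used elsewhere on this page:

```python
import time

def poll_generation_request(call_tool, request_id, max_attempts=20):
    """Hypothetical polling loop for get_generation_request."""
    for _ in range(max_attempts):
        result = call_tool("get_generation_request",
                           {"request_id": request_id})
        if result.get("status") in ("succeeded", "failed"):
            return result
        # Prefer the server-suggested delay; fall back to the
        # documented 10-30 second interval.
        time.sleep(result.get("poll_after_ms", 15_000) / 1000)
    return None
```

Because the tool is not a pure read (it may update lastPolledAt and deliver a pending webhook), keeping the interval wide matters more here than with a typical status endpoint.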
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description implies a read operation ('poll'), but the annotations set readOnlyHint=false, indicating the tool may not be read-only. This is a direct contradiction. No further behavioral details (e.g., side effects, latency, rate limits) are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description packs the prerequisite condition, the main action, and polling guidance into a few tight sentences. It is concise and front-loaded with the key information, with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description does not need to detail return values. However, it does not explain asynchronous behavior, error handling, or how to interpret results when polling is incomplete. This is adequate but leaves gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%. The include_logs parameter has a clear, context-rich description explaining when to set it true. The request_id parameter lacks a description, but its purpose is clear from the name. Overall, adds some value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (poll, i.e., retrieve) and the resource (generation request by id). It also specifies the prerequisite authentication context, distinguishing this tool from others like create_generation_request or list_capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: only when a Cannon Studio OAuth token or developer API key is present and the user wants to poll by id. This is strong guidance, though it does not mention explicit alternatives for other scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_model_availability: Get Cannon Studio Model Availability (grade A)
Annotations: read-only, idempotent

List public Cannon Studio model availability by product surface. Public read-only: no auth, no state changes, no charges; model availability is surface-specific and does not guarantee account eligibility or remaining credits.

Parameters (JSON Schema)
surface (optional): Surface filter, such as image tools, video tools, Creator Flow, World Generator, image-api, video-api, or three-d-api.

Output Schema

Fields (JSON Schema)
note (required)
surfaces (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already include readOnlyHint=true and idempotentHint=true. The description adds that the tool queries across multiple surfaces (tool, Creator Flow, World Generator, API). No behavioral contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two well-structured sentences that convey purpose and usage without any filler words. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one optional parameter and an output schema, the description adequately covers when to use it and what it returns. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description does not add new meaning to the single optional 'surface' parameter beyond what the schema description already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns availability of image, video, or 3D models across multiple surfaces. It uses a specific verb ('get') and resource ('model availability'), and distinguishes itself from siblings by focusing on exposed models.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description makes the trigger explicit: use it when a user asks which models are available on a given product surface. It provides clear context, though it does not explicitly mention when not to use it or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pricing_context: Get Cannon Studio Pricing Context (grade A)
Annotations: read-only, idempotent

Explain public Cannon Studio pricing, credits, plans, and usage tradeoffs. Public read-only: no auth, no state changes, no charges; use list_offerings or get_checkout_link only when the user asks for available purchase paths.

Parameters (JSON Schema)
plan (optional): Plan or tier name the user mentioned, such as free, hobbyist, creator, pro, team, or enterprise.
use_case (optional): Workload or scenario to price, such as UGC ads, AI video, 3D generation, narration, team workflows, or developer API automation.
media_type (optional): Media category, such as image, video, 3D, audio, narration, subtitles, lip sync, or post-production.

Output Schema

ParametersJSON Schema
NameRequiredDescription
planYes
sourcesYes
summaryYes
use_caseYes
media_typeYes
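All three parameters are optional, so any subset is a valid argument set. As a minimal sketch, assuming the server follows the standard MCP JSON-RPC `tools/call` request shape (the argument values here are illustrative, not canonical):

```python
import json

# Hypothetical tools/call request for get_pricing_context.
# Every argument is optional; any subset (including none) is valid.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_pricing_context",
        "arguments": {
            "plan": "creator",       # free | hobbyist | creator | pro | team | enterprise
            "use_case": "UGC ads",   # workload to price
            "media_type": "video",   # media category
        },
    },
}

print(json.dumps(request, indent=2))
```

Since no parameter is required, sending an empty `arguments` object should be equally valid for a broad pricing overview.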
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds context beyond annotations (readOnlyHint, idempotentHint, destructiveHint) by stating it returns 'public pricing guidance and canonical sources'. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences covering purpose and usage, no filler. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Output schema exists so return values are covered. Parameters are optional but not explained; however, for a pricing context tool with simple parameters, completeness is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Three parameters (plan, use_case, media_type) have no description in schema or in tool description. With 0% schema coverage, description fails to compensate, leaving agents uninformed about parameter meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns public pricing guidance for plans, credits, subscriptions, etc. It distinguishes from siblings like estimate_generation_cost and get_checkout_link.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'use this when a user asks about plans, credits, subscriptions...' providing clear context. No mention of when not to use, but alternatives are present in sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_capabilities: List Cannon Studio Capabilities (A)
Read-only · Idempotent

List public Cannon Studio capabilities for an audience, workflow, or output type. Public read-only: no auth, no state changes, no charges; use search or fetch when the user needs deeper source text.

Parameters (JSON Schema)

Name | Required | Description | Default
audience | No | Optional persona or buyer filter, such as creators, agencies, marketing teams, filmmakers, developers, or teams.
workflow | No | Optional workflow filter, such as UGC ads, Creator Flow, World Generator, API automation, 3D generation, audio, or post-production.
output_type | No | Optional desired output format, such as image, video, 3D model, 3D location, narration, music, subtitles, or lip sync.

Output Schema (JSON Schema)

Name | Required
stats | Yes
matches | Yes
summary | Yes
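Since every `list_capabilities` parameter is optional, a caller only needs to send the filters it actually has. A sketch of a hypothetical client-side helper (not part of the server) that assembles such an argument object from the values listed above:

```python
# Hypothetical helper: build a list_capabilities "arguments" object
# containing only the filters the caller actually set.
def build_capability_args(audience=None, workflow=None, output_type=None):
    """All three filters are optional; drop any that were left unset."""
    args = {"audience": audience, "workflow": workflow, "output_type": output_type}
    return {k: v for k, v in args.items() if v is not None}

# Example values taken from the parameter table above.
args = build_capability_args(audience="developers", output_type="3D model")
# args == {"audience": "developers", "output_type": "3D model"}
```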
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent behavior. The description adds that it returns 'public capability context and relevant source links,' which is informative beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler, directly to the point. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description is adequate. It covers purpose, usage triggers, and output type. Minor gap: could mention that capabilities are static or subject to change.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, the description partially compensates by mentioning 'workflow, audience, team, or output format' but does not detail each parameter's type or role. It adds some meaning but could be more precise.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists Cannon Studio capabilities and specifies triggers like workflow, audience, team, or output format. However, it does not explicitly differentiate from sibling tools like 'list_offerings' or 'recommend_workflow'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this when a user asks what Cannon Studio can do...' providing clear guidance on when to invoke. It lacks explicit exclusions or alternative tool mentions, but the context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_offerings: List Cannon Studio Offerings (A)
Read-only · Idempotent

List public Cannon Studio plans, credit packs, and team offerings. Public read-only: no auth, no state changes, no charges; returns first-party checkout or inquiry URLs without creating Stripe sessions or granting credits.

Parameters (JSON Schema)

Name | Required | Description | Default
kind | No | Optional offering kind filter: free, subscription, credit_pack, or team.
interval | No | Optional subscription interval filter: month or year.
include_checkout_links | No | Set false to omit checkout URLs from the response.

Output Schema (JSON Schema)

Name | Required
notes | Yes
offerings | Yes
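The `kind` and `interval` filters are closed enumerations in the schema above, so a client can validate them before calling. A hypothetical pre-flight check (the helper is illustrative, not part of the server), using only the values the parameter table documents:

```python
# Enumerations taken from the list_offerings parameter table above.
VALID_KINDS = {"free", "subscription", "credit_pack", "team"}
VALID_INTERVALS = {"month", "year"}

def offerings_args(kind=None, interval=None, include_checkout_links=True):
    """Validate filters against the documented enums and build the arguments object."""
    if kind is not None and kind not in VALID_KINDS:
        raise ValueError(f"kind must be one of {sorted(VALID_KINDS)}")
    if interval is not None and interval not in VALID_INTERVALS:
        raise ValueError("interval must be 'month' or 'year'")
    args = {}
    if kind:
        args["kind"] = kind
    if interval:
        args["interval"] = interval
    if not include_checkout_links:  # default is to include checkout URLs
        args["include_checkout_links"] = False
    return args
```

For example, `offerings_args(kind="credit_pack")` yields `{"kind": "credit_pack"}`, while an unknown kind fails fast client-side instead of producing an unhelpful server response.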
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark readOnlyHint and idempotentHint as true. The description adds that it returns public self-serve offerings and safe first-party links without payment session creation, providing useful behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words. The first sentence immediately conveys the primary usage scenario, and the second adds safety and return-value context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters with 100% schema coverage, an output schema, and annotations providing safety hints, the description sufficiently explains purpose, usage, and behavior. No gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all 3 parameters (kind, interval, include_checkout_links). The tool description does not add additional parameter-level details, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states when to use the tool ('when a user asks which Cannon Studio plans, credit packs, or team offerings are available') and what it returns, clearly distinguishing it from sibling tools like 'get_checkout_link' or 'create_generation_request'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use the tool and highlights that it does not create a payment session, implying safe usage. However, it does not explicitly mention when not to use it or compare directly to alternatives beyond the implicit distinction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_workflow: Recommend Cannon Studio Workflow (A)
Read-only · Idempotent

Recommend a Cannon Studio workflow for a stated creative or developer goal. Public read-only: no auth, no state changes, no charges; use this for planning, not to create generation jobs.

Parameters (JSON Schema)

Name | Required | Description | Default
goal | Yes | User's desired outcome or problem to solve, such as producing UGC ads, planning a short film, generating 3D assets, or automating API media generation.
audience | No | Optional user or organization type, such as solo creator, agency, brand team, developer, filmmaker, or enterprise team.
team_size | No | Optional team context, such as solo, small team, agency team, or enterprise team; used to bias collaboration and review recommendations.
output_type | No | Optional final output target, such as image, video, ad, trailer, 3D model, 3D location, audio, subtitles, or API integration.
budget_sensitivity | No | Optional cost posture, such as low, medium, high, cost-sensitive, or speed-prioritized; used to frame pricing and iteration tradeoffs.

Output Schema (JSON Schema)

Name | Required
goal | Yes
steps | Yes
sources | Yes
recommendation | Yes
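Unlike the sibling tools above, `recommend_workflow` has one required parameter, `goal`; the others only bias the recommendation. A sketch of an arguments object under that assumption, with illustrative values drawn from the parameter table:

```python
import json

# Hypothetical recommend_workflow call: only "goal" is required,
# the remaining fields steer the recommendation.
arguments = {
    "goal": "produce UGC ads for a product launch",  # required
    "audience": "brand team",                        # optional persona
    "team_size": "small team",                       # optional collaboration context
    "output_type": "ad",                             # optional final output target
    "budget_sensitivity": "cost-sensitive",          # optional cost posture
}

payload = {"name": "recommend_workflow", "arguments": arguments}
print(json.dumps(payload, indent=2))
```

A well-behaved client would reject a call with `goal` missing before it reaches the server, since the schema marks it required.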
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and idempotentHint true, indicating a safe, idempotent operation. The description adds minimal behavioral insight (e.g., recommending a 'path') but does not contradict annotations or provide deeper context like whether results vary by input.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that conveys the main usage scenario efficiently. It wastes no words, though it could expand slightly on parameters without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters and an output schema, the description lacks completeness. It does not explain the role of any parameter, how the recommendation is determined, or what to expect from the output despite the schema existing. The tool's complexity demands more context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description should compensate by explaining parameters. However, it only gives example values for outputs (Creator Flow, etc.) and does not clarify what 'goal', 'audience', 'team_size', 'output_type', or 'budget_sensitivity' mean. This leaves the agent with little guidance beyond parameter names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: recommending a Cannon Studio workflow based on a creative goal. It lists example paths (Creator Flow, World Generator, tool hubs, teams, API automation), making the tool's function specific and distinct from siblings like compare_alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this when a prospect describes a creative goal and needs the best Cannon Studio path,' providing a clear scenario. It does not enumerate exclude cases or directly name alternatives, but the sibling list implies other tools exist for different situations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
