Cannon Studio
Server Details
Public Cannon Studio MCP for product, pricing, workflow, model, and API answers.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 14 of 14 tools scored. Lowest: 3.1/5.
Each tool targets a distinct operation or query (e.g., account status, competitor comparison, generation creation, cost estimation, API docs, checkout, etc.) with no overlapping purposes. Descriptions clearly specify when to use each, avoiding ambiguity.
Most tools follow a verb_noun pattern using underscores (e.g., create_generation_request, list_capabilities), but there are minor deviations: api_status (noun_noun) and fetch (single verb). The overall pattern is mostly consistent.
14 tools cover the major capabilities of Cannon Studio—billing, generation, search, comparison, workflow recommendations, API info, etc.—without feeling bloated or too sparse. The scope matches the platform's complexity.
Core operations (create, get, list, search, compare, recommend) are present, but lifecycle coverage has minor gaps: generation has create and get but no cancel or update, and a user's own generations can only be retrieved by id, not listed. These gaps are noted but do not severely hinder agent workflows.
Available Tools
14 tools

api_status: Check Cannon Studio Developer API Status (A)
Check authenticated Cannon Studio account/API connectivity before estimating or creating requests. Requires OAuth or a developer API key; may update key/token usage metadata, but does not spend credits, enqueue jobs, change assets, or expose secrets.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| status | Yes | |
| response | Yes |
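As a sketch of how an agent client would invoke this tool (assuming a standard MCP JSON-RPC transport; the request id and wiring are illustrative), the tools/call payload needs no arguments:

```python
import json

# Sketch of an MCP tools/call request for api_status. The envelope follows
# JSON-RPC 2.0 as used by MCP; the id value here is arbitrary.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "api_status", "arguments": {}},
}

# api_status takes no parameters, so arguments stays empty.
wire_payload = json.dumps(request)
```

Because the tool spends no credits and changes no assets, it is safe to call up front before any estimate or generation step.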
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations lack behavioral info (all false). The description adds that the tool 'may update key/token usage metadata,' which is a non-trivial side effect beyond status checking. This compensates for the low annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loads the usage condition, and each sentence adds value. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple status-check tool with an output schema, the description provides purpose, usage precondition, and behavioral nuance. It is fully complete given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so schema description coverage is 100%. With no parameters, the description need not add param details, and baseline 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks account status and may update usage metadata. This is specific and actionable. It does not explicitly distinguish from siblings, but no other sibling appears to perform a similar function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states the condition for use: the request must carry a Cannon Studio OAuth token or developer API key. This provides clear context, though no alternatives or when-not-to-use guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_alternatives: Compare Cannon Studio Alternatives (A; Read-only, Idempotent)
Compare Cannon Studio's fit against named alternatives for a use case. Public read-only: no auth, no state changes, no charges; it returns approved positioning and cautions agents not to invent competitor claims.
| Name | Required | Description | Default |
|---|---|---|---|
| use_case | Yes | Specific job-to-be-done for the comparison, such as UGC ads, AI filmmaking, image generation, 3D workflows, team review, or API media generation. | |
| alternatives | No | Optional competitor/tool names the user mentioned, such as Runway, LTX Studio, Pika, Midjourney, Higgsfield, or a generic point generator. |
Output Schema
| Name | Required | Description |
|---|---|---|
| caution | Yes | |
| sources | Yes | |
| use_case | Yes | |
| positioning | Yes | |
| alternatives | Yes |
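An illustrative argument object for this tool, built from the parameter table above. Whether 'alternatives' accepts a list or a single string is not shown here, so a list is assumed:

```python
# Illustrative compare_alternatives arguments; parameter names come from the
# input schema, and the list type for 'alternatives' is an assumption.
arguments = {
    "use_case": "UGC ads",               # specific job-to-be-done
    "alternatives": ["Runway", "Pika"],  # competitor names the user mentioned
}

# 'alternatives' is optional, so a minimal call needs only use_case.
minimal = {"use_case": "API media generation"}
```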
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent traits. The description adds value by stating the tool returns approved positioning and cautions agents not to invent competitor claims, which clarifies behavioral boundaries beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, directly front-loading the purpose and behavioral constraints. Every sentence is meaningful with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple, read-only comparison tool with an output schema, the description covers core purpose, usage context, and behavioral constraints. The missing parameter semantics are a gap, but overall completeness is high.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both schema parameters carry detailed descriptions with examples, but the tool description itself never mentions 'use_case' or 'alternatives', so it adds no parameter guidance beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to compare Cannon Studio with other tools. It specifies the verb 'compare', the resource 'Cannon Studio alternatives', and the scope: returns approved positioning with source links without inventing claims.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description frames the tool around comparing Cannon Studio's fit against named alternatives for a specific use case, which gives clear usage context. However, it does not mention when not to use the tool or explicitly contrast with siblings like 'recommend_workflow' or 'search', limiting guidance slightly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_generation_request: Create Cannon Studio Generation Request (A; Destructive)
Create billable async Cannon Studio generation work only after explicit user approval. Requires OAuth or a developer API key; can spend credits up to max_credits and cannot be cancelled through MCP after submission. Use estimate_generation_cost first, then set confirmed=true and a user-approved max_credits cap. This tool does not create API keys, charge payment methods directly, or delete assets.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | Operation-specific request payload. Use the exact shape documented by get_api_operation for the selected operation; this is the billable payload that will be submitted if confirmed and within max_credits. | |
| confirmed | No | Must be true only after the user explicitly approves this billable generation request, including operation, payload, and max_credits. Missing or false returns a confirmation error and creates no job. | |
| operation | Yes | Cannon Studio developer API operation id to run. Use get_api_operation first if unsure. Examples: image.generate, video.generate, three_d.model.generate, three_d.location.generate, music.generate, narration.generate, subtitles.generate. | |
| max_credits | No | Highest credit spend the user explicitly approved for this request. The tool rejects the request when the current estimate is greater than this cap. | |
| webhook_url | No | Optional HTTPS URL that Cannon Studio calls when the request reaches a terminal succeeded or failed state. Omit when polling with get_generation_request. | |
| idempotency_key | No | Optional stable retry key for the same operation and payload. Reuse it when retrying after a network/client error; do not generate a new key for the same intended request. |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | No | |
| error | No | |
| status | No | |
| response | No | |
| maxCredits | No | |
| estimatedCredits | No |
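The confirmed/max_credits handshake described above can be sketched as a small guard that refuses to assemble a billable request until the user has approved it and the estimate fits the cap. This is an illustrative helper, not part of the API; the argument keys mirror the tool's input schema, and the model id in the example payload is hypothetical:

```python
def build_create_arguments(operation, payload, estimated_credits, max_credits, user_approved):
    """Assemble create_generation_request arguments, setting confirmed=True
    only after explicit user approval and only when the current estimate
    fits the user-approved credit cap."""
    if not user_approved:
        raise ValueError("generation request needs explicit user approval")
    if estimated_credits > max_credits:
        raise ValueError("estimate exceeds the user-approved max_credits cap")
    return {
        "operation": operation,
        "input": payload,
        "confirmed": True,
        "max_credits": max_credits,
    }

args = build_create_arguments(
    operation="image.generate",
    # field names follow the schema examples; the model id is hypothetical
    payload={"prompt": "sunset over a harbor", "model": "example-model", "aspect_ratio": "16:9"},
    estimated_credits=12,  # taken from a prior estimate_generation_cost call
    max_credits=20,
    user_approved=True,
)
```

Since submitted work cannot be cancelled through MCP, failing closed like this before the call is the safer default.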
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds critical behavioral context beyond annotations: billable nature (consistent with destructiveHint) and lack of MCP cancellation. It also specifies authentication requirements, enriching the agent's understanding of side effects and constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences, front-loaded with the most critical constraints (explicit user approval and auth). Every sentence adds essential information, and there is no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers authentication, billable nature, and cancellation limitation. With an output schema present, the return structure is documented separately. However, it could mention the async polling pattern or suggest using get_generation_request for status updates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Every schema parameter carries a description, and the tool description reinforces the safety-critical ones by spelling out the confirmed=true and max_credits workflow. Parameter guidance extends meaningfully beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it creates billable async generation work, distinguishing it from other tools like estimate_generation_cost or get_generation_request. The purpose is specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description requires explicit user approval and an OAuth token or API key, and warns that submission cannot be cancelled through MCP. It names estimate_generation_cost as the prerequisite step, providing strong contextual guidance for safe usage, though no explicit when-not-to-use cases are listed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_generation_cost: Estimate Developer API Generation Cost (B)
Estimate credits for a Cannon Studio generation request before creating billable work. Requires OAuth or a developer API key; it may update key/token usage metadata but does not spend credits, enqueue jobs, or change assets. Use get_api_operation first if operation or input fields are unclear, then pass the same operation/input pair to create_generation_request after user approval.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | Operation-specific request payload to estimate. Use the exact shape documented by get_api_operation for the selected operation, for example image.generate expects fields like prompt/model/aspect_ratio, video.generate expects prompt/model/duration/aspect_ratio, and three_d.location.generate expects source_image_urls plus optional angle_context. | |
| operation | Yes | Cannon Studio developer API operation id to price. Use get_api_operation first if unsure. Examples: image.generate, video.generate, three_d.model.generate, three_d.location.generate, music.generate, narration.generate, subtitles.generate. |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | No | |
| note | No | |
| error | No | |
| label | No | |
| status | No | |
| response | No | |
| operation | No | |
| estimatedCredits | No |
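An example argument object for an estimate call, using the video.generate field shape documented in the input schema (prompt/model/duration/aspect_ratio). The model id and duration units are assumptions; get_api_operation is the authoritative source for exact payload shapes:

```python
# Illustrative estimate_generation_cost arguments; no credits are spent and
# no job is enqueued by this call.
estimate_args = {
    "operation": "video.generate",
    "input": {
        "prompt": "aerial shot of a coastline at dawn",
        "model": "example-video-model",  # hypothetical model id
        "duration": 5,                   # units not documented here; seconds assumed
        "aspect_ratio": "16:9",
    },
}
```

After user approval, the same operation/input pair should be passed unchanged to create_generation_request.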
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explicitly discloses behavioral traits: the tool may update key/token usage metadata but does not spend credits, enqueue jobs, or change assets. This is consistent with the annotations (all false) and goes beyond them in detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose and auth requirement; no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description defers payload shapes to get_api_operation rather than documenting them inline, while the schema descriptions supply example fields per operation. This is workable, but a first attempt may require an extra tool call to learn required fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both schema parameters ('input' and 'operation') carry rich descriptions with example operation ids and payload field shapes, and the tool description adds the get_api_operation lookup path. Parameter guidance is substantive beyond bare structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states purpose: estimate credit usage before creating a billable request. It specifies verb (estimate) and resource (generation cost), but does not explicitly differentiate from sibling tools like get_pricing_context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear when-to-use context (auth present, cost estimate wanted before billable work) and names the surrounding workflow: get_api_operation first, create_generation_request after user approval. It offers no explicit when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch: Fetch Cannon Studio Knowledge Record (A; Read-only, Idempotent)
Fetch one public Cannon Studio knowledge record by id after search. Public read-only: no auth, no state changes, no charges; use search first when you do not already have a record id.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Record id returned by the search tool. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| url | Yes | |
| text | Yes | |
| title | Yes | |
| metadata | No |
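The search-then-fetch flow can be sketched as follows. The search result shape below is a stand-in; the only contract relied on is that search returns record ids that fetch accepts:

```python
# Hypothetical search tool output; real field names beyond 'id' may differ.
search_results = [
    {"id": "rec_123", "title": "Pricing overview"},
    {"id": "rec_456", "title": "Credit packs"},
]

# Pick a record and build the fetch arguments; 'id' is the only parameter.
fetch_args = {"id": search_results[0]["id"]}
```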
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds value by stating the record is public and by directing agents to run search first when they lack an id, behavioral detail beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys all necessary information without any fluff. It is front-loaded and every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The output schema documents the returned fields (id, url, text, title, metadata), so the description need not restate them. Combined with clear annotations and a single-parameter input, the tool definition is complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description for 'id' already states it is the record id returned by the search tool, which the tool description echoes. With 100% schema coverage and no new parameter meaning added, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the verb 'fetch' and the resource 'Cannon Studio knowledge record', and distinguishes it from 'search' by positioning fetch as the follow-up step once a record id is known.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells agents to use search first when no record id is in hand, providing clear context for when to invoke the tool. It does not enumerate other alternatives, but the sibling relationship with search makes the usage pattern clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_operation: Get Cannon Studio API Operation Docs (A; Read-only, Idempotent)
Return public docs for Cannon Studio developer API operations and payload shapes. Public read-only: no auth, no state changes, no charges; use this before estimate_generation_cost or create_generation_request when operation/input fields are unclear.
| Name | Required | Description | Default |
|---|---|---|---|
| operation | No | Optional operation id such as image.generate, video.generate, three_d.model.generate, three_d.location.generate, narration.generate, or subtitles.generate. |
Output Schema
| Name | Required | Description |
|---|---|---|
| operations | Yes |
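A minimal sketch of the two call shapes. The schema marks the operation id as optional; it is assumed here that omitting it asks for docs on every operation:

```python
# Illustrative argument objects for get_api_operation.
list_everything = {}                                 # no filter: all operations (assumed)
one_operation = {"operation": "image.generate"}      # example id from the schema
```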
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, non-destructive. The description adds explicit behavioral boundaries (no auth, no state changes, no charges) plus sequencing guidance, which is useful beyond what the annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and usage. Every sentence adds value; no superfluous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple, has output schema, and annotations. Description covers enough context for an agent to use it appropriately for querying API docs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%: the optional 'operation' parameter lists concrete example ids. The description adds sequencing context (use before estimate_generation_cost or create_generation_request) rather than new parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns public docs for Cannon Studio developer API operations and payload shapes. The verb and resource are explicit, and it is readily distinguished from sibling tools that handle status, creation, and cost.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly directs agents to use this tool before estimate_generation_cost or create_generation_request when operation or input fields are unclear. This gives clear when-to-use context and a concrete place in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_checkout_link: Get Cannon Studio Checkout Link (A; Read-only, Idempotent)
Return the first-party Cannon Studio checkout or inquiry URL for a selected offering. Public read-only: no auth, no state changes, no charges; use list_offerings first to get a valid product_key.
| Name | Required | Description | Default |
|---|---|---|---|
| product_key | Yes | Offering id returned by list_offerings, such as subscription:creator:month or credits:2500. |
Output Schema
| Name | Required | Description |
|---|---|---|
| safety | Yes | |
| nextStep | Yes | |
| offering | Yes | |
| productKey | Yes | |
| checkoutUrl | Yes | |
| chargeStatus | Yes | |
| createsStripeSession | Yes |
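A small sketch of validating a product_key before calling this tool. The schema shows only two key shapes ("subscription:creator:month" and "credits:2500"), so the pattern below is an assumption generalized from those examples, not a documented grammar:

```python
import re

# Assumed product_key grammar, generalized from the two schema examples.
PRODUCT_KEY = re.compile(r"^(?:subscription:[a-z_-]+:[a-z_-]+|credits:\d+)$")

def checkout_arguments(product_key):
    """Validate a product_key obtained from list_offerings before
    calling get_checkout_link."""
    if not PRODUCT_KEY.match(product_key):
        raise ValueError(f"unrecognized product_key: {product_key}")
    return {"product_key": product_key}
```

Since the tool only returns a first-party URL and never charges, the main failure mode to guard against is passing a key that list_offerings never produced.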
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false, establishing a safe read operation. The description adds clarity by confirming it does not charge, collect payment, grant credits, or create a Stripe session, which prevents the agent from assuming any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded with the most important usage instruction, followed by a clear statement of what the tool does not do. Every sentence adds value, and there is no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter, read-only tool with full annotation coverage and an output schema, the description covers all necessary aspects: usage context, behavioral boundaries, and parameter origin. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers the parameter 'product_key' with a description that includes examples. The tool description adds context by specifying that this parameter comes from list_offerings, which helps the agent understand the data flow and relationship to sibling tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a checkout URL for Cannon Studio plans, credit packs, or team inquiries, and explicitly distinguishes it from payment or charging actions. It specifies the prerequisite (use after list_offerings) and the output nature (URL only), making its purpose very clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly sequences usage: run list_offerings first to obtain a valid product_key, then call this tool for the selected offering. This provides clear context and avoids misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_generation_request: Poll and Sync Cannon Studio Generation Request (B)
Poll and sync an existing Cannon Studio generation request by id. Requires OAuth or a developer API key; not a pure read because it may update lastPolledAt, sync downstream task state, update logs, and deliver one pending terminal webhook. It does not create work, spend credits, cancel jobs, delete data, or change assets. Poll sparingly using poll_after_ms or 10-30 second intervals.
| Name | Required | Description | Default |
|---|---|---|---|
| request_id | Yes | Cannon Studio request id returned by create_generation_request or POST /api/v1/requests. This is not a provider task id. | |
| include_logs | No | Set true only when the user explicitly asks to inspect retained request logs for this request. |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | No | |
| error | No | |
| status | No | |
| response | No |
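The polling cadence described above can be sketched as a small loop that honors a server-supplied poll_after_ms when present and otherwise stays in the 10-30 second band. The terminal status names 'succeeded' and 'failed' come from the webhook_url schema note; the exact response field carrying poll_after_ms is an assumption:

```python
import time

def next_poll_delay_seconds(poll_after_ms=None):
    """Use a server-supplied poll_after_ms when present, otherwise fall
    back to the 10-30 second band (15s is an arbitrary midpoint)."""
    if poll_after_ms is not None:
        return poll_after_ms / 1000.0
    return 15.0

def poll_until_terminal(call_tool, sleep=time.sleep, max_attempts=20):
    """Poll get_generation_request until a terminal state. call_tool stands
    in for the MCP invocation and must return the tool's result dict."""
    for _ in range(max_attempts):
        result = call_tool()
        if result.get("status") in ("succeeded", "failed"):
            return result
        sleep(next_poll_delay_seconds(result.get("poll_after_ms")))
    raise TimeoutError("generation request did not reach a terminal state")
```

Because each poll is not a pure read (it can update lastPolledAt and deliver a pending webhook), keeping the interval generous matters more here than with a typical status endpoint.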
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations set readOnlyHint=false, and the description explains why: polling may update lastPolledAt, sync downstream task state, update logs, and deliver one pending terminal webhook. It also states what the tool will not do and sets a polling cadence expectation, so behavioral disclosure goes well beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences that front-load the main action and auth prerequisite, then bound side effects and polling cadence. It is concise, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description does not need to detail return values, and it covers polling cadence via poll_after_ms and 10-30 second intervals. It does not explain error handling or how to interpret a still-running request, which leaves minor gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters carry clear, context-rich schema descriptions: include_logs explains exactly when to set it true, and request_id clarifies that it is not a provider task id. The definition adds real value beyond the bare structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (poll, i.e., retrieve) and the resource (generation request by id). It also specifies the prerequisite authentication context, distinguishing this tool from others like create_generation_request or list_capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: only when a Cannon Studio OAuth token or developer API key is present and the user wants to poll by id. This is strong guidance, though it does not mention explicit alternatives for other scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_model_availability: Get Cannon Studio Model Availability (A; Read-only, Idempotent)
List public Cannon Studio model availability by product surface. Public read-only: no auth, no state changes, no charges; model availability is surface-specific and does not guarantee account eligibility or remaining credits.
| Name | Required | Description | Default |
|---|---|---|---|
| surface | No | Optional surface filter, such as image tools, video tools, Creator Flow, World Generator, image-api, video-api, or three-d-api. |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | Yes | |
| surfaces | Yes |
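Two illustrative argument objects for this tool, using a surface example from the schema:

```python
# get_model_availability arguments; 'surface' is optional.
all_surfaces = {}                         # no filter: availability for every surface
video_only = {"surface": "video tools"}   # one of the surface examples from the schema

# Note: per the description, a model listed for a surface may still be
# unavailable to a given account; eligibility and credits are not checked here.
```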
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already include readOnlyHint=true and idempotentHint=true. The description adds the caution that availability is surface-specific and does not guarantee account eligibility or remaining credits. No behavioral contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that conveys purpose and usage without any filler words. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one optional parameter and an output schema (not shown but exists), the description adequately covers when to use and what it returns. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add new meaning to the single optional 'surface' parameter beyond what the schema description already provides. The baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns availability of image, video, or 3D models across multiple surfaces. It uses a specific verb ('get') and resource ('model availability'), and distinguishes itself from siblings by focusing on exposed models.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: 'when a user asks which... models... exposes across... surfaces.' It provides clear context, though it does not explicitly mention when not to use it or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pricing_context: Get Cannon Studio Pricing Context (Read-only, Idempotent)
Explain public Cannon Studio pricing, credits, plans, and usage tradeoffs. Public read-only: no auth, no state changes, no charges; use list_offerings or get_checkout_link only when the user asks for available purchase paths.
| Name | Required | Description | Default |
|---|---|---|---|
| plan | No | Optional plan or tier name the user mentioned, such as free, hobbyist, creator, pro, team, or enterprise. | |
| use_case | No | Optional workload or scenario to price, such as UGC ads, AI video, 3D generation, narration, team workflows, or developer API automation. | |
| media_type | No | Optional media category, such as image, video, 3D, audio, narration, subtitles, lip sync, or post-production. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| plan | Yes | |
| sources | Yes | |
| summary | Yes | |
| use_case | Yes | |
| media_type | Yes | |
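All three parameters are optional, so a caller only needs to send what the user actually mentioned. A small builder that drops unset values keeps the payload minimal; the parameter names mirror the schema above, while the helper and the argument values are illustrative.

```python
# Hypothetical argument builder for get_pricing_context.
# Unset parameters are omitted rather than sent as null.
def pricing_args(plan=None, use_case=None, media_type=None):
    raw = {"plan": plan, "use_case": use_case, "media_type": media_type}
    return {k: v for k, v in raw.items() if v is not None}

args = pricing_args(plan="creator", media_type="video")
```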
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds context beyond annotations (readOnlyHint, idempotentHint, destructiveHint) by stating it returns 'public pricing guidance and canonical sources'. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences covering purpose and usage, no filler. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
An output schema exists, so return values are covered. The parameters are optional and not further explained; however, for a pricing context tool with simple parameters, completeness is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Three parameters (plan, use_case, media_type) have no description in schema or in tool description. With 0% schema coverage, description fails to compensate, leaving agents uninformed about parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns public pricing guidance for plans, credits, subscriptions, etc. It distinguishes from siblings like estimate_generation_cost and get_checkout_link.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'use this when a user asks about plans, credits, subscriptions...' providing clear context. No mention of when not to use, but alternatives are present in sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_capabilities: List Cannon Studio Capabilities (Read-only, Idempotent)
List public Cannon Studio capabilities for an audience, workflow, or output type. Public read-only: no auth, no state changes, no charges; use search or fetch when the user needs deeper source text.
| Name | Required | Description | Default |
|---|---|---|---|
| audience | No | Optional persona or buyer filter, such as creators, agencies, marketing teams, filmmakers, developers, or teams. | |
| workflow | No | Optional workflow filter, such as UGC ads, Creator Flow, World Generator, API automation, 3D generation, audio, or post-production. | |
| output_type | No | Optional desired output format, such as image, video, 3D model, 3D location, narration, music, subtitles, or lip sync. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| stats | Yes | |
| matches | Yes | |
| summary | Yes | |
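The three filters are independent and can be combined. A sketch of a call that narrows capabilities to one persona and one output format follows; the tool and parameter names come from the listing above, and the values are examples only.

```python
# Illustrative list_capabilities call combining two of the
# three optional filters (workflow is left unset).
payload = {
    "name": "list_capabilities",
    "arguments": {"audience": "developers", "output_type": "3D model"},
}
```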
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent behavior. The description adds that it returns 'public capability context and relevant source links,' which is informative beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler, directly to the point. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the output schema exists, the description is adequate. It covers purpose, usage triggers, and output type. Minor gap: it could mention whether capabilities are static or subject to change.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, the description partially compensates by mentioning 'workflow, audience, team, or output format' but does not detail each parameter's type or role. It adds some meaning but could be more precise.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists Cannon Studio capabilities and specifies triggers like workflow, audience, team, or output format. However, it does not explicitly differentiate from sibling tools like 'list_offerings' or 'recommend_workflow'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this when a user asks what Cannon Studio can do...' providing clear guidance on when to invoke. It lacks explicit exclusions or alternative tool mentions, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_offerings: List Cannon Studio Offerings (Read-only, Idempotent)
List public Cannon Studio plans, credit packs, and team offerings. Public read-only: no auth, no state changes, no charges; returns first-party checkout or inquiry URLs without creating Stripe sessions or granting credits.
| Name | Required | Description | Default |
|---|---|---|---|
| kind | No | Optional offering kind filter: free, subscription, credit_pack, or team. | |
| interval | No | Optional subscription interval filter: month or year. | |
| include_checkout_links | No | Set false to omit checkout URLs from the response. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| notes | Yes | |
| offerings | Yes | |
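A sketch of the arguments this tool accepts, using the documented enum values for `kind` and `interval` and the boolean toggle for checkout links. The specific combination shown is illustrative.

```python
# Illustrative list_offerings arguments, built from the
# documented parameter values in the table above.
arguments = {
    "kind": "subscription",           # free | subscription | credit_pack | team
    "interval": "year",               # month | year
    "include_checkout_links": False,  # omit checkout URLs from the response
}
```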
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark readOnlyHint and idempotentHint as true. The description adds that it returns public self-serve offerings and safe first-party links without payment session creation, providing useful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. The first sentence immediately conveys the primary usage scenario, and the second adds safety and return-value context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters with 100% schema coverage, an output schema, and annotations providing safety hints, the description sufficiently explains purpose, usage, and behavior. No gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 3 parameters (kind, interval, include_checkout_links). The tool description does not add additional parameter-level details, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states when to use the tool ('when a user asks which Cannon Studio plans, credit packs, or team offerings are available') and what it returns, clearly distinguishing it from sibling tools like 'get_checkout_link' or 'create_generation_request'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use the tool and highlights that it does not create a payment session, implying safe usage. However, it does not explicitly mention when not to use it or compare directly to alternatives beyond the implicit distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_workflow: Recommend Cannon Studio Workflow (Read-only, Idempotent)
Recommend a Cannon Studio workflow for a stated creative or developer goal. Public read-only: no auth, no state changes, no charges; use this for planning, not to create generation jobs.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | Yes | User's desired outcome or problem to solve, such as producing UGC ads, planning a short film, generating 3D assets, or automating API media generation. | |
| audience | No | Optional user or organization type, such as solo creator, agency, brand team, developer, filmmaker, or enterprise team. | |
| team_size | No | Optional team context, such as solo, small team, agency team, or enterprise team; used to bias collaboration and review recommendations. | |
| output_type | No | Optional final output target, such as image, video, ad, trailer, 3D model, 3D location, audio, subtitles, or API integration. | |
| budget_sensitivity | No | Optional cost posture, such as low, medium, high, cost-sensitive, or speed-prioritized; used to frame pricing and iteration tradeoffs. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| goal | Yes | |
| steps | Yes | |
| sources | Yes | |
| recommendation | Yes | |
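Since `goal` is the only required argument and the other four are optional biasing hints, a call can be sketched as below. The helper function and the example values are hypothetical; only the parameter names come from the schema above.

```python
# Hypothetical recommend_workflow call builder: "goal" is required,
# optional hints are passed through only when set and recognized.
def recommend_workflow_call(goal, **optional):
    allowed = {"audience", "team_size", "output_type", "budget_sensitivity"}
    arguments = {"goal": goal}
    arguments.update({k: v for k, v in optional.items() if k in allowed and v})
    return {"name": "recommend_workflow", "arguments": arguments}

call = recommend_workflow_call(
    "produce UGC ads for a consumer brand",
    audience="brand team",
    budget_sensitivity="cost-sensitive",
)
```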
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint true, indicating a safe, idempotent operation. The description adds minimal behavioral insight (e.g., recommending a 'path') but does not contradict annotations or provide deeper context like whether results vary by input.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that conveys the main usage scenario efficiently. It wastes no words, though it could expand slightly on parameters without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 5 parameters and an output schema, the description is incomplete. It does not explain the role of any parameter, how the recommendation is determined, or what to expect from the output, even though the output schema exists. The tool's complexity demands more context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should compensate by explaining parameters. However, it only gives example values for outputs (Creator Flow, etc.) and does not clarify what 'goal', 'audience', 'team_size', 'output_type', or 'budget_sensitivity' mean. This leaves the agent with little guidance beyond parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: recommending a Cannon Studio workflow based on a creative goal. It lists example paths (Creator Flow, World Generator, tool hubs, teams, API automation), making the tool's function specific and distinct from siblings like compare_alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this when a prospect describes a creative goal and needs the best Cannon Studio path,' providing a clear scenario. It does not enumerate exclusion cases or directly name alternatives, but the sibling list implies other tools exist for different situations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search: Search Cannon Studio Knowledge (Read-only, Idempotent)
Search public Cannon Studio knowledge when a user asks about products, workflows, pricing, models, comparisons, use cases, or developer API docs. Public read-only: no auth, no state changes, no charges; call fetch with a returned id when full source-backed text is needed.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results. Defaults to 8 and caps at 20. | |
| query | Yes | Natural-language query to search Cannon Studio public knowledge. | |
| audience | No | Optional audience/persona filter such as creators, agencies, marketing teams, or developers. | |
| category | No | Optional category filter such as pricing, developer, comparison, model_availability, use_case, tools, or creator_flow. | |
| pain_point | No | Optional pain point filter such as consistent characters, cost predictability, team review, model choice, or finishing. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | |
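The `limit` parameter has documented behavior worth encoding: it defaults to 8 and caps at 20. A small argument builder that enforces those bounds might look like this; the helper name and the query text are illustrative, while the parameter names and limits come from the table above.

```python
# Hypothetical argument builder for the search tool, enforcing the
# documented limit behavior: defaults to 8, caps at 20.
def search_args(query, limit=None, **filters):
    args = {"query": query, "limit": 8 if limit is None else min(limit, 20)}
    args.update({k: v for k, v in filters.items() if v is not None})
    return args

args = search_args("consistent characters in AI video",
                   limit=50, category="use_case")
```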
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as read-only and non-destructive. The description adds that it returns 'citation-friendly public records' and advises using 'fetch' for full text, which provides useful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with purpose and usage, with no fluff. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema available, the description adequately covers purpose, usage, and follow-up action. It could mention the filtering options, but the input schema compensates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 5 parameters, so the baseline is 3. The description does not add additional parameter semantics beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches 'Cannon Studio Knowledge' and lists specific topics (Cannon Studio, AI video workflows, etc.), making the scope explicit and distinct from sibling tools like 'compare_alternatives' or 'get_model_availability'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool ('when a user asks about...') and directs to call 'fetch' for full text, providing clear usage guidance. It lacks explicit when-not-to-use or direct sibling comparison, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.