Server Details

Create and track AI music videos and audio-reactive visuals from songs.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Compellerai/compeller-mcp
GitHub Stars: 0

Tool Descriptions: A

Average 4/5 across 20 of 20 tools scored. Lowest: 3.3/5.

Server Coherence: A

Disambiguation: 5/5

Every tool serves a clearly distinct purpose, with detailed descriptions that differentiate similar operations like create_compel and create_compel_from_music. No two tools overlap in functionality.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern in snake_case, with clear prefixes like create_, list_, get_, search_, etc. Even compound names like create_compel_from_music and rotate_webhook_secret maintain the pattern logically.

Tool Count: 5/5

20 tools is entirely appropriate for a media generation platform covering compels, renderings, webhooks, media upload, search, and platform capabilities. Each tool earns its place without redundancy.

Completeness: 4/5

The tool set covers the main creation, status, search, rendering, and webhook lifecycle well. However, it lacks tools to update or delete a compel, which could be a gap in some workflows.

Available Tools

20 tools
create_compel (B)

Create a Compeller generation job from primary audio media and optional reference media. Returns compel id, status, and links to track progress and retrieve renderings.

Parameters (JSON Schema):
- style (optional): Visual style: cinematic, performance, or abstract
- title (required): Title for the compel
- aspect_ratio (optional): Explicit aspect ratio override: 16:9, 9:16, or 1:1
- artist_context (optional): Additional creative context about the artist or song
- target_platform (optional): Target platform for aspect ratio: tiktok, reels, shorts, instagram, youtube
- primary_media_id (required): ID of the uploaded audio media to use as the primary track
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are minimal (readOnlyHint=false, etc.). Description mentions return values but does not disclose key behaviors like job queuing, duration, or potential costs. Adequate but could be improved.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with key action and outputs. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters (2 required) and no output schema, the description lacks details on error handling and the job's asynchronous nature, and it mentions 'reference media' that has no corresponding input parameter. Incomplete given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters with descriptions, so baseline is 3. However, the description mentions 'optional reference media' which is not present in the input schema, creating a mismatch that reduces clarity and could mislead.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Create') and resource ('Compeller generation job'), with explicit inputs and outputs. Differentiates from sibling 'create_compel_from_music' via mention of 'primary audio media'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'create_compel_from_music'. The description does not provide context for choosing one over the other.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
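As a sketch of how an agent might assemble a create_compel call from the schema above: only primary_media_id and title are required, and unset optional fields are best omitted so the server applies its own defaults. The field names come from the parameter table; the media id value is made up for illustration.

```python
def build_create_compel_args(primary_media_id, title, *, style=None,
                             aspect_ratio=None, artist_context=None,
                             target_platform=None):
    """Assemble an argument dict for create_compel per the schema above.

    primary_media_id and title are required; optional fields left as
    None are dropped rather than sent explicitly.
    """
    args = {"primary_media_id": primary_media_id, "title": title}
    optional = {"style": style, "aspect_ratio": aspect_ratio,
                "artist_context": artist_context,
                "target_platform": target_platform}
    # Keep only the optional fields the caller actually set.
    args.update({k: v for k, v in optional.items() if v is not None})
    return args

# "med_123" is a hypothetical id from a prior media upload.
args = build_create_compel_args("med_123", "Neon Skyline", style="cinematic")
```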

create_compel_from_music (A)

Create a compel from a selected Deezer preview track. If the user provides an audio file instead, upload that file and use create_compel.

Parameters (JSON Schema):
- style (optional): Visual style: cinematic, performance, or abstract
- title (optional): Optional compel title override
- track_id (required): Deezer track id returned by search_music
- aspect_ratio (optional): Explicit aspect ratio override
- artist_context (optional): Additional creative context
- target_platform (optional): Target platform for aspect ratio
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate non-read-only and non-destructive nature. The description adds context that it uses a Deezer preview track and suggests an alternative workflow for audio files. No contradictions, but could mention more about the creation outcome.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and use case. No extraneous information, every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the core use case and alternative, but does not explain what a 'compel' is or what the tool returns. Given no output schema, this is a minor gap. However, the presence of sibling tools like 'get_compel' and 'list_compels' partially compensates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so descriptions for each parameter are already present. The tool description does not add further meaning to parameters beyond the context of using a Deezer track, which aligns with the schema description of track_id. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a compel from a Deezer preview track, and distinguishes it from the sibling 'create_compel' for audio files. The verb 'create' and specific resource 'compel from Deezer preview track' are precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use this tool (Deezer preview track) and when to use the alternative 'create_compel' (user-provided audio file), providing clear decision guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_webhook (A, destructive)

Delete a webhook endpoint by id. Removes the registration and stops future deliveries.

Parameters (JSON Schema):
- webhook_id (required): The webhook id returned by register_webhook or list_webhooks
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate destructiveHint: true. The description adds that it 'stops future deliveries', providing useful behavioral context beyond what annotations offer. It aligns with the destructive nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences without redundancy. Every word adds value, and the description is front-loaded with the action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple with one parameter and no output schema. The description fully covers purpose, effect, and required input, leaving no gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for webhook_id. The description only adds 'by id', which doesn't meaningfully extend beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Delete', the resource 'webhook endpoint by id', and the effect 'Removes the registration and stops future deliveries'. It uniquely identifies this tool among siblings like register_webhook, update_webhook, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a webhook id to delete, but lacks explicit context about when to use versus alternatives like update_webhook or when not to use. No exclusion or prerequisite guidance beyond the id requirement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_capabilities (A, read-only)

Get Compeller platform capabilities, supported features, and API version.

Parameters (JSON Schema): none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond the readOnlyHint annotation by listing what is returned (capabilities, features, API version). Although it omits details like caching or error behavior, the read-only nature is clear and consistent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that starts with the verb 'Get' and immediately conveys the purpose. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (no parameters, no output schema), the description sufficiently conveys what the tool returns. It is complete for the agent to understand its function.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, the schema is fully covered (100%). The description does not need to add parameter semantics, and baseline for 0 parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves platform capabilities, supported features, and API version. It uses the verb 'Get' and specifies the resource, distinguishing it from other tools like get_compel or get_rendering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, typical use cases, or any conditions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_compel (A, read-only)

Get the current status, progress, and details of a Compeller generation job.

Parameters (JSON Schema):
- compel_id (required): The compel ID to check
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates a read operation, consistent with the readOnlyHint annotation. However, it adds no extra behavioral details beyond what the annotation already conveys, such as rate limits or result size constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no unnecessary words. It efficiently conveys the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with one parameter and no output schema, the description adequately specifies the return content (status, progress, details). The context is complete given the tool's low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the sole parameter 'compel_id', and the description adds no additional meaning beyond what is already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Get' and resource 'Compeller generation job', clearly stating it retrieves status, progress, and details. This distinguishes it from sibling tools like list_compels (listing all jobs) or create_compel (creating jobs).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives such as list_compels or search_compels, which might also provide status information. There are no when-not-to-use or prerequisite conditions stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pricing (A, read-only)

Get current Compeller subscription plans and pricing.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true. The description merely restates the purpose without adding extra behavioral context beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with verb and resource, no wasted words. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with no inputs and no output schema, the description fully conveys what the tool does. No missing information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With no parameters, the baseline is 4. The description adds meaning by specifying the type of information retrieved (subscription plans and pricing), which is not apparent from the empty input schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'Compeller subscription plans and pricing.' It distinguishes from sibling tools, which focus on rendering, webhooks, media, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when or when not to use this tool versus alternatives. However, given its simplicity and uniqueness among siblings, usage context is implicitly clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_rendering (A, read-only)

Get details of a specific video rendering.

Parameters (JSON Schema):
- rendering_id (required): The rendering ID to retrieve
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the read-only behavior is covered. The description adds no further behavioral context (e.g., response details or constraints), but does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no redundant information. It is appropriately concise for a simple retrieval tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (one parameter, read-only) and no output schema, the description sufficiently explains the tool's purpose and parameter. The missing output details are acceptable for a straightforward retrieval operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the parameter 'rendering_id' is adequately described. The description does not add additional meaning beyond what the schema provides, landing at the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get details of a specific video rendering', which is a specific verb+resource combination. It distinguishes from siblings like list_renderings (list all) and start_render (create).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. It does not mention conditions or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_compels (A, read-only)

List all Compeller generation jobs for the authenticated account, newest first.

Parameters (JSON Schema):
- limit (optional): Max results to return (default 20, max 100)
- offset (optional): Pagination offset
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the safety profile is clear. The description adds context: it lists all jobs for the authenticated account and orders by newest first. This goes beyond annotations without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise, front-loaded, and contains no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description does not explain what fields or structure the returned list contains. For a list tool with no output schema, this is a notable gap, though the sibling tools may provide context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes both parameters (limit and offset) with 100% coverage. The description adds no additional semantic meaning beyond what the schema provides, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'List all Compeller generation jobs for the authenticated account, newest first.' It specifies verb (list), resource (Compeller generation jobs), scope (authenticated account), and ordering (newest first), which distinguishes it from siblings like create_compel or search_compels.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use list_compels versus alternatives such as search_compels or get_compel. It does not mention when not to use this tool or any prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
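Because limit defaults to 20 and caps at 100, an agent that wants every job has to page with offset. A minimal sketch, assuming each call returns a plain list of at most limit items; list_page below stands in for the actual list_compels call:

```python
def fetch_all(list_page, limit=100):
    """Collect every item from a limit/offset-paginated listing.

    A page shorter than `limit` signals the end of the results.
    """
    items, offset = [], 0
    while True:
        page = list_page(limit=limit, offset=offset)
        items.extend(page)
        if len(page) < limit:
            break
        offset += limit
    return items
```

Stopping on a short page rather than an empty one saves one final round trip whenever the total is not an exact multiple of limit.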

list_renderings (A, read-only)

List all video renderings for a specific compel.

Parameters (JSON Schema):
- compel_id (required): The compel ID to list renderings for
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond the readOnlyHint annotation by specifying the resource type ('video renderings') and the constraint (specific compel). It does not detail auth needs or rate limits, but the simple read operation is adequately disclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no wasted words. It efficiently conveys the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with annotations and full schema coverage, the description is complete enough. It could mention that it returns a list of renderings, but the absence of an output schema makes this less critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing a clear parameter description. The tool description adds minimal value by repeating 'specific compel', but it does not introduce new semantics or format details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (list), resource (video renderings), and scope (for a specific compel). It distinguishes from sibling tools like 'get_rendering' by indicating it lists all renderings, not a single one.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage: when you need all renderings for a given compel. However, it offers no explicit when-not or alternative tool guidance, relying on the context of sibling tool names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_styles (A, read-only)

List available visual styles for Compeller video generation.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description is consistent with the readOnlyHint annotation, but adds no behavioral context beyond what the annotation already provides. It does not describe return format or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded and contains exactly the necessary information without any wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (no parameters, read-only), the description is adequate. It could optionally mention the return format, but it's not critical for a list operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so the schema coverage is 100%. The description adds context that the styles are visual and for Compeller video generation, which is meaningful beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists available visual styles specific to Compeller video generation, using a specific verb and resource. It distinguishes itself from sibling list tools like list_compels and list_renderings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use or alternatives is provided. However, for a simple parameterless list, the usage is implied from the name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_webhooks (A, read-only)

List all webhook endpoints registered for the authenticated account. Secrets are never returned by this tool.

Parameters (JSON Schema): none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true. The description adds that secrets are never returned, which is a key behavioral detail. No mention of pagination or limits, but acceptable for a simple list.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded, no wasted words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless tool with readOnlyHint annotation and no output schema, the description provides sufficient context: purpose, scope, and a critical behavioral constraint (secrets hidden). No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist; schema coverage is 100%. Baseline 3 applies as description adds no parameter-specific information, which is fine.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it lists all webhook endpoints for the authenticated account, distinguishing it from create/delete/update siblings. Also notes that secrets are not returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage for listing but no explicit when-to-use or alternatives. The note about secrets being hidden provides some usage guidance but no exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_webhook (Grade: A)

Register an HTTPS endpoint to receive signed push notifications for compel terminal events (compel.completed, compel.failed). Returns webhook_id and the HMAC-SHA256 signing secret exactly once — store the secret immediately, it is never returned again. Deliveries are signed via X-Compeller-Signature: sha256= over the raw body.

Parameters (JSON Schema):
- url (required): HTTPS URL to deliver events to (max 2048 chars)
- events (optional): Event types to subscribe to. Omit or pass ["*"] for all. Known types: compel.completed, compel.failed.
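The signing scheme in the description can be checked on the receiver side. A minimal sketch, assuming the X-Compeller-Signature header carries `sha256=` followed by the hex HMAC-SHA256 of the raw request body, keyed with the secret returned at registration:

```python
import hashlib
import hmac

def verify_compeller_signature(secret: str, raw_body: bytes, header: str) -> bool:
    """Return True if the header matches the HMAC-SHA256 of the raw body.

    Assumes a 'sha256=<hex digest>' header format. Always verify against the
    raw bytes as received, before any JSON parsing or re-serialization.
    """
    prefix = "sha256="
    if not header.startswith(prefix):
        return False
    expected = hmac.new(secret.encode("utf-8"), raw_body, hashlib.sha256).hexdigest()
    # hmac.compare_digest runs in constant time, avoiding timing side channels.
    return hmac.compare_digest(header[len(prefix):], expected)
```

Verifying over the raw body matters: re-serializing parsed JSON can reorder keys or change whitespace and silently break the digest.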
Behavior: 4/5

Annotations indicate mutation (readOnlyHint=false) and unknown side effects (openWorldHint=true). The description adds critical behavioral details: the secret is returned exactly once, deliveries are signed via HMAC-SHA256, and the event types are named. This goes beyond what the annotations provide, though it could mention duplicate-registration behavior.

Conciseness: 5/5

Three concise sentences, no wasted words. Starts with the purpose, then the critical secret-once warning, then the delivery signature details. Each sentence earns its place.

Completeness: 4/5

With two parameters and no output schema, the description covers the return values (webhook_id, secret) and events. It explains secret handling and signing. It lacks error scenarios or validation hints, but given the sibling test_webhook_delivery, it is adequate.

Parameters: 4/5

The input schema covers 100% of parameters. The description enhances it by explaining that omitting events or passing ["*"] subscribes to all events, and by noting the URL max length. This adds value beyond the schema's enum and type info.

Purpose: 5/5

The description clearly states 'Register an HTTPS endpoint to receive signed push notifications for compel terminal events', specifying the verb (register), resource (HTTPS endpoint), and events. It is distinguished from siblings like list_webhooks or delete_webhook by its focus on creation and the unique secret-once behavior.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool (to receive push notifications for compel events) and emphasizes critical behavior (store the secret immediately). However, it does not explicitly compare to alternatives like polling or specify scenarios where this tool is inappropriate.

rotate_webhook_secret (Grade: A, Destructive)

Mint a new HMAC-SHA256 signing secret for a registered webhook endpoint. The previous secret is invalidated immediately — integrators must update their receiver before the next compel terminal event fires. Returns the new secret exactly once; store it on receipt.

Parameters (JSON Schema):
- webhook_id (required): The webhook id to rotate the secret on
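Because the old secret is invalidated immediately on the server, deliveries already in flight may still carry the old signature while new ones carry the new one. A hedged receiver-side pattern for that cutover window, checking a delivery against every secret currently known to the integration (this is a client-side convention, not documented server behavior):

```python
import hashlib
import hmac

def verify_any(secrets, raw_body: bytes, header: str) -> bool:
    """Accept a delivery if its signature verifies against any known secret.

    During a rotate_webhook_secret cutover, keep the outgoing secret in the
    list until the new one is deployed to every receiver instance.
    """
    prefix = "sha256="
    if not header.startswith(prefix):
        return False
    digest = header[len(prefix):]
    return any(
        hmac.compare_digest(
            digest,
            hmac.new(s.encode("utf-8"), raw_body, hashlib.sha256).hexdigest(),
        )
        for s in secrets
    )
```

Once the new secret is live everywhere, drop the old one from the list so stale signatures stop verifying.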
Behavior: 5/5

Beyond the annotations (destructiveHint=true), the description details the destruction ('invalidated immediately') and the one-time return of the new secret, providing critical behavioral context.

Conciseness: 5/5

Three clear sentences, front-loaded with the main action, and no unnecessary words. Every sentence adds value.

Completeness: 5/5

For a one-parameter tool with annotations and no output schema, the description adequately covers the effect, the invalidation notice, and the storage instruction. Complete enough for effective use.

Parameters: 3/5

With 100% schema coverage for the single parameter 'webhook_id', the description adds no additional meaning beyond what the schema already provides. The baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the action ('mint a new signing secret') and the resource ('for a registered webhook endpoint'), effectively distinguishing this tool from siblings like register_webhook or update_webhook.

Usage Guidelines: 4/5

The description implies use for rotation and warns about immediate invalidation and the need for integrators to update their receiver, but it does not explicitly compare to alternatives or state when not to use it.

search_compels (Grade: A, Read-only)

Search Compeller generation jobs by title for the authenticated account.

Parameters (JSON Schema):
- limit (optional): Max results to return (default 20)
- query (required): Search query to match against compel titles
Behavior: 3/5

Annotations already declare readOnlyHint=true, so the description's statement matches. It adds that the search is scoped to the authenticated account, which is useful but not a major behavioral disclosure beyond the annotation.

Conciseness: 5/5

The description is a single sentence of 10 words, front-loading the core purpose with zero redundancy or filler.

Completeness: 3/5

With two parameters, no output schema, and moderate complexity, the description provides minimal but adequate context. It explains the search scope but doesn't mention response format or pagination, which would help completeness.

Parameters: 3/5

Schema coverage is 100%, so parameters are fully documented. The description reinforces that 'query' matches titles but adds no extra meaning or context beyond the schema.

Purpose: 5/5

The description states a specific verb ('Search') and resource ('Compeller generation jobs by title'), clearly distinguishing it from siblings like list_compels or get_compel. It tells the agent exactly what the tool does.

Usage Guidelines: 3/5

The description implies usage via 'by title for the authenticated account' but does not explicitly state when to use this tool over alternatives like list_compels or search_media. No exclusions or comparisons are provided.

search_media (Grade: B, Read-only)

Search and list uploaded media files for the authenticated account.

Parameters (JSON Schema):
- type (optional): Filter by media type: audio, image, video, or text
- limit (optional): Max results to return (default 20, max 100)
- offset (optional): Pagination offset
Behavior: 3/5

Annotations already mark the tool as read-only. The description adds the context that it operates on the authenticated account's files, but does not detail pagination, return format, or other behaviors. With annotations covering safety, the description adds modest value.

Conciseness: 4/5

The description is a single concise sentence with no wasted words. It is front-loaded with the key action and resource, making it efficient. However, it could include more useful details without sacrificing conciseness.

Completeness: 2/5

Given the lack of an output schema, the description should hint at what the search returns (e.g., metadata, URLs). It does not, and it also fails to mention pagination behavior beyond the schema params. For a tool with three parameters and no output schema, the description is incomplete.

Parameters: 3/5

All parameters have descriptions in the input schema (100% coverage). The description does not add any further detail about parameters beyond what the schema provides, so it meets the baseline but does not exceed it.

Purpose: 5/5

The description clearly states the tool's action ('Search and list') and resource ('uploaded media files'), and specifies the scope ('for the authenticated account'). It distinguishes itself from sibling tools like search_compels and search_music.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives (e.g., search_music for music-specific searches). It only mentions 'for the authenticated account,' which is basic context but not comparative guidance.

search_music (Grade: A, Read-only)

Search Deezer preview tracks by song, artist, or album. Use this when the user provides a song string but no MP3/WAV/FLAC file.

Parameters (JSON Schema):
- limit (optional): Max results to return (default 10, max 20)
- query (required): Song, artist, or album search query
Behavior: 3/5

Annotations already declare readOnlyHint=true, so the description does not need to restate that. However, it adds no further behavioral details (e.g., rate limits, result format, or limitations of Deezer previews).

Conciseness: 5/5

Two concise sentences, front-loaded with the core purpose. No wasted words.

Completeness: 4/5

For a simple search tool with clear parameters and annotations, the description covers the key usage context. However, without an output schema, mentioning the return structure (e.g., a list of track objects) would enhance completeness.

Parameters: 3/5

The input schema covers both parameters with descriptions (100% coverage), so the description adds minimal extra meaning. It implies the query parameter through the searchable fields but does not detail limit behavior beyond the schema.

Purpose: 4/5

The description clearly states the action (search), resource (Deezer preview tracks), and searchable fields (song, artist, album). However, it does not differentiate itself from the sibling tool 'search_media', which may lead to ambiguity when both are available.

Usage Guidelines: 4/5

It explicitly states to use this tool when the user provides a song string but no MP3/WAV/FLAC file, which gives clear context. However, it does not name alternatives or situations where this tool should not be used.

start_render (Grade: A)

Start rendering a READY compel with default configuration so an agent can continue to final MP4 without opening the browser UI.

Parameters (JSON Schema):
- compel_id (required): The READY compel ID to render
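After kicking off a render, an agent typically polls the sibling get_rendering tool until a terminal state. A hedged sketch of that loop, where `start_render` and `get_rendering` are stand-ins for the MCP tool calls and the `rendering_id`/`status` response fields are assumptions, not documented shapes:

```python
import time

def render_and_wait(start_render, get_rendering, compel_id,
                    poll_interval=5.0, timeout=600.0):
    """Start a render, then poll until an assumed terminal status is reached."""
    rendering = start_render(compel_id)  # kicks off the default-config render
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        rendering = get_rendering(rendering["rendering_id"])
        if rendering["status"] in ("completed", "failed"):
            return rendering
        time.sleep(poll_interval)  # back off between polls
    raise TimeoutError(f"compel {compel_id}: rendering did not finish in {timeout}s")
```

Registering a webhook for compel terminal events avoids polling entirely when a push-capable receiver is available.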
Behavior: 2/5

Annotations indicate the tool is not read-only but not destructive. The description adds no behavioral context beyond 'start rendering': it lacks information on prerequisites (the READY requirement appears in the schema but is not clarified further), failure modes, and side effects.

Conciseness: 5/5

A single concise sentence delivers the purpose, condition, and outcome without extraneous words. All information is front-loaded and relevant.

Completeness: 3/5

The tool has one parameter, fully described, but lacks guidance on the return value or follow-up steps. The sibling get_rendering exists, but the description doesn't hint at polling or error handling, making it minimally adequate.

Parameters: 3/5

With 100% schema coverage, the schema already documents compel_id as 'The READY compel ID to render'. The description repeats 'READY' but adds no new semantics beyond what the schema provides, earning a baseline score of 3.

Purpose: 5/5

The description clearly states the tool starts rendering a READY compel with default configuration to produce a final MP4, avoiding the browser UI. It is distinguished from siblings like get_rendering or list_renderings, which are for querying status or listing.

Usage Guidelines: 4/5

The description implies the tool is used when you want to trigger rendering without manual browser interaction. However, it doesn't explicitly mention when not to use it or point to alternatives like get_rendering for tracking progress.

test_webhook_delivery (Grade: A)

Synchronously POST a synthetic webhook.test event to a registered endpoint. Uses the same HMAC-SHA256 signature as real deliveries, runs the standard URL safety check at delivery time, and returns {webhook_id, event_id, event_type, delivered, response_status, response_body_preview, latency_ms, error?}. Ignores the endpoint's events subscription — test delivery is always on-demand. Use this to verify your integration before relying on compel.completed / compel.failed events.

Parameters (JSON Schema):
- webhook_id (required): The webhook id to test. Must belong to the authenticated account.
Behavior: 5/5

The description fully discloses behavior: it is a synchronous POST that performs a URL safety check, uses the same signature as real deliveries, and returns a structured response with fields like webhook_id, event_id, and error. This adds significant context beyond the annotations, which already indicate a non-read-only, non-destructive operation.

Conciseness: 5/5

The description is concise at four sentences, front-loading the core action (a synthetic POST) and key behaviors (signature, URL check, response structure). Every sentence provides essential information without redundancy.

Completeness: 5/5

Given the tool's simplicity (a single parameter, no output schema), the description comprehensively covers the return value, behavior, and purpose. It explains the response fields explicitly, compensating for the lack of an output schema.

Parameters: 3/5

The input schema covers 100% of parameters with clear descriptions, so the baseline is 3. The main description does not add meaning beyond what the schema already provides for the 'webhook_id' parameter, but this is acceptable given the high schema coverage.

Purpose: 5/5

The description clearly specifies that the tool sends a synthetic 'webhook.test' event via synchronous POST to a registered endpoint, using the same HMAC-SHA256 signature as real deliveries. It distinguishes itself from sibling tools like 'register_webhook' by focusing on testing rather than registration.

Usage Guidelines: 5/5

The description explicitly states to use this tool to verify integration before relying on 'compel.completed / compel.failed' events. It also notes that the tool ignores the endpoint's events subscription, providing clear context for when to use it versus real event delivery.

update_webhook (Grade: A)

Update one or more mutable fields on a registered webhook endpoint: url, events, active. At least one of these must be provided. Validation mirrors register_webhook (https-only, ≤ 2048 chars, URL safety blocklist). Returns the updated endpoint (secret is never returned by this tool — use rotate_webhook_secret for that).

Parameters (JSON Schema):
- url (optional): New HTTPS URL (max 2048 chars)
- active (optional): Toggle delivery on/off without losing the registration
- events (optional): Replacement event types. Omit to leave unchanged. ["*"] or an empty filtered list resets to wildcard.
- webhook_id (required): The webhook id to update
Behavior: 4/5

Annotations already indicate mutation (readOnlyHint=false) and non-destructive behavior (destructiveHint=false). The description adds validation constraints, notes that the secret is never returned, and specifies the return value. No contradictions with the annotations.

Conciseness: 5/5

Four sentences, no fluff. The first states the main purpose and fields, the middle two cover the at-least-one requirement and validation constraints, and the last addresses the return value and secret omission. Every sentence earns its place.

Completeness: 5/5

Despite no output schema, the description explains the return value ('updated endpoint') and explicitly notes that the secret is never returned, filling the gap. All critical behavioral aspects are covered.

Parameters: 4/5

Schema coverage is 100%, so the baseline is 3. The description adds significant value by stating that at least one field must be provided, explaining that omitting events leaves it unchanged, and clarifying the wildcard-reset behavior. This goes beyond the schema descriptions.

Purpose: 5/5

The description clearly states the verb 'Update' and the resource 'registered webhook endpoint', and lists the three mutable fields (url, events, active). It is distinct from sibling tools like delete_webhook and rotate_webhook_secret.

Usage Guidelines: 5/5

The description explicitly requires at least one of the mutable fields to be provided, mentions that validation mirrors register_webhook, and directs users to rotate_webhook_secret for secret management. This provides clear when-to-use and when-not-to-use guidance.

upload_media (Grade: A, Read-only)

Get upload instructions for media files (audio, images, video). Returns the upload URL and required headers. Requires API token authentication.

Parameters (JSON Schema):
- name (optional): Filename for the upload
- type (optional): Media type: audio, image, video, or text
- mime_type (optional): MIME type of the file
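Since this tool returns instructions rather than performing the transfer, a client completes the upload itself. A sketch under stated assumptions: `get_upload_instructions` stands in for the upload_media tool call, and both the `{'url', 'headers'}` response shape and the HTTP PUT method are guesses based on the description, not documented behavior:

```python
import urllib.request

def upload_file(get_upload_instructions, path, media_type,
                opener=urllib.request.urlopen):
    """Two-step upload: fetch signed instructions, then send the raw bytes."""
    filename = path.rsplit("/", 1)[-1]
    info = get_upload_instructions(name=filename, type=media_type)
    with open(path, "rb") as fh:
        data = fh.read()
    # Send the bytes to the returned URL with exactly the headers specified.
    req = urllib.request.Request(info["url"], data=data,
                                 headers=info["headers"], method="PUT")
    with opener(req) as resp:
        return resp.status
```

Sending the service-provided headers verbatim matters: presigned uploads typically reject requests whose headers differ from those used to sign the URL.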
Behavior: 3/5

The description confirms the read-only nature, consistent with the annotations. It adds that the tool returns an upload URL and headers but does not disclose any other behavioral traits beyond what the annotations provide.

Conciseness: 5/5

Three sentences, zero waste, front-loaded with the key information. Every sentence adds value.

Completeness: 4/5

For a simple tool with no output schema, the description adequately explains what the tool returns and requires. It could add more detail on the response format or the usage flow.

Parameters: 3/5

The schema covers all parameters with descriptions. The description repeats the media types from the schema but adds no new semantics or constraints beyond it.

Purpose: 4/5

The description clearly states that it retrieves upload instructions for media files, specifying the returned data (URL and headers). It positions the tool as a prerequisite step but could more explicitly differentiate it from the actual upload operation.

Usage Guidelines: 3/5

It mentions the authentication requirement but lacks explicit guidance on when to use the tool versus alternatives, with no exclusions or context about prerequisites.
