Server Details

Promote music on Spotify and grow YouTube channels through AI-powered Meta and Google ad campaigns.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: getDynamoi/mcp
GitHub Stars: 1

Tool Descriptions: A

Average 4.1/5 across 28 of 28 tools scored. Lowest: 3.4/5.

Server Coherence: A
Disambiguation 5/5

Each tool targets a specific action on a distinct resource (smart link, campaign, artist, account, etc.). Descriptions clearly differentiate overlapping functions like readiness checks and deployment status.

Naming Consistency 5/5

All tools follow 'dynamoi_verb_noun' snake_case pattern consistently, making it predictable for an agent to infer purpose from the name.

Tool Count 4/5

28 tools is slightly high but well-justified given the breadth of functionality: smart link management, campaign management, analytics, billing, and account operations. No redundant tools.

Completeness 4/5

Covers full lifecycle for smart links and campaigns, including creation, read, update, pause/resume, analytics, and readiness checks. Minor gaps: no delete/archive for campaigns or smart links, and media asset management is limited to listing.

Available Tools

23 tools
dynamoi_get_account_overview - Get Account Overview (A)
Read-only

Use this when the user explicitly asks about the signed-in Dynamoi account itself, such as who is logged in, how many organizations or artists it can access, or whether account-level platform connections exist. Always pass intent to match that explicit account question. Do not use this to confirm a specific Meta or YouTube onboarding attempt because this account-level state can span multiple artists; use dynamoi_get_platform_status for the target artist instead. Do not use this to enumerate artists one by one; use dynamoi_list_artists for that. Never use this to 'check context' before answering generic Instagram, lyrics, songwriting, or marketing-advice questions, even if Dynamoi is attached.

Parameters (JSON Schema)
format (optional)
intent (required)

Output Schema
status (required)
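Since the schema ships with no parameter descriptions, a concrete request can help. The sketch below is a hypothetical MCP tools/call payload for this tool; the parameter names come from the listing above, but every value (including the question text) is an illustrative assumption, not documented behavior.

```python
import json

# Hypothetical tools/call request for dynamoi_get_account_overview.
# Parameter names are from the schema listing; all values are assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "dynamoi_get_account_overview",
        "arguments": {
            # intent is required: echo the user's explicit account question,
            # per "Always pass intent to match that explicit account question".
            "intent": "Which organizations and artists can this account access?",
            # format is optional; "summary" is an assumed example value.
            "format": "summary",
        },
    },
}
print(json.dumps(request, indent=2))
```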
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and non-destructive. Description adds behavioral context: should only be used for explicit account questions, not for generic context. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is front-loaded with main purpose and usage. Some redundancy in warnings, but overall clear and well-structured. Slightly longer than necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description explains what the tool returns (account details, connections) sufficiently for an agent to understand. Lacks details on response format but adequate for simple query.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. Description adds meaning for intent parameter by listing valid values and explaining usage. Does not address format parameter. Partially compensates for lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves account overview details like logged-in user, organizations, artists, and platform connections. It distinguishes from siblings by explicitly stating not to use for artist enumeration or generic context checking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use (user asks about signed-in account) and when-not-to-use (checking context, generic questions). Names alternative tool dynamoi_list_artists for artist enumeration.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_get_artist_analytics - Get Artist Analytics (A)
Read-only

Use this when the user wants artist-level performance across all campaigns, including 30-day rollups or daily breakdowns. Pass granularity=DAILY when the user asks for a daily breakdown. Pass format=summary when the user wants a written rollup, a strongest-campaign verdict, or a direct answer you can relay immediately. If this tool already returned the requested strongest-campaign comparison, stop and answer instead of calling more analytics tools. For one campaign's metrics, use dynamoi_get_campaign with includeAnalytics=true.

Parameters (JSON Schema)
format (optional)
artistId (required)
dateRange (optional)
granularity (optional)

Output Schema
status (required)
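The description's parameter guidance (granularity=DAILY for daily breakdowns, format=summary for written rollups) can be captured in a small builder. This is a hypothetical sketch: the function name is invented here, and the artistId value is an assumption.

```python
# Hypothetical argument builder for dynamoi_get_artist_analytics, following
# the description's guidance. Only the parameter names and the two value
# strings ("DAILY", "summary") come from the description; everything else
# is an illustrative assumption.
def build_artist_analytics_args(artist_id: str, daily: bool, want_summary: bool) -> dict:
    args = {"artistId": artist_id}          # required
    if daily:
        args["granularity"] = "DAILY"       # per the description: daily breakdown
    if want_summary:
        args["format"] = "summary"          # per the description: written rollup
    return args

print(build_artist_analytics_args("artist_123", daily=True, want_summary=True))
```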
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds behavioral context beyond annotations: it discloses that the tool can return a written rollup or strongest-campaign verdict, and instructs the agent to stop and answer if the tool already returned the requested comparison. This enriches transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each delivering essential information without redundancy. The most critical usage guidance is front-loaded, and the description avoids extraneous details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters (including a nested object), no output schema, and there are multiple sibling analytics tools, the description adequately covers when to use, parameter choices, and expected return types (rollups, daily breakdowns, written verdicts). It lacks explicit mention of default values or error states, but is sufficient for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (no descriptions in schema). The description adds meaning for 'granularity' (DAILY for daily breakdown) and 'format' (summary for written rollup/direct answer), but does not explain 'artistId' or 'dateRange' beyond what the schema provides. It partially compensates for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves 'artist-level performance across all campaigns,' using a specific verb ('get') and resource ('artist analytics'), and distinguishes from sibling dynamoi_get_campaign_analytics by specifying scope (all campaigns vs. one campaign).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('when the user wants artist-level performance'), provides parameter guidance (granularity=DAILY for daily breakdowns, format=summary for written rollups), and explicitly redirects single-campaign questions: for one campaign's metrics, the description points to dynamoi_get_campaign with includeAnalytics=true.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_get_billing - Get Billing (A)
Read-only

Use this when the user asks about billing state, credit balance, promo limits, or whether billing is blocking launches for one artist. When polling after dynamoi_start_subscription_checkout, pass the returned onboardingAttemptId so Dynamoi ops can correlate the chat-first Checkout attempt. Do not use this for campaign analytics or platform connection troubleshooting.

Parameters (JSON Schema)
format (optional)
artistId (required)
onboardingAttemptId (optional)

Output Schema
status (required)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the description adds minimal behavioral context. It implies the operation is specific to one artist but does not disclose return format or other behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with usage conditions, no redundant words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's purpose and usage scope, but fails to document the 'format' parameter or return value, which could lead to incomplete agent decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain the 'format' parameter or the 'artistId' parameter beyond implying an artist is needed. The enum for format is undocumented in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves billing state, credit balance, promo limits, and billing blocking for an artist. It distinguishes from sibling tools by explicitly excluding campaign analytics and platform connection troubleshooting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use and when-not-to-use guidance, but does not name alternative sibling tools for the excluded cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_get_campaign - Get Campaign (A)
Read-only

Use this when the user wants full details for one campaign, including budget, targeting, platform status, and next actions. Set includeAnalytics=true for one-campaign performance, includeDeploymentStatus=true for delivery/deployment blockers, and includeCountries=true only when the full country list is needed. Do not use this for a campaign list; use dynamoi_list_campaigns instead. After a successful launch or campaign mutation, prefer format=summary when you need a follow-up read to relay the final answer.

Parameters (JSON Schema)
format (optional)
campaignId (required)
includeAnalytics (optional)
includeCountries (optional)
analyticsDateRange (optional)
analyticsGranularity (optional)
includeDeploymentStatus (optional)

Output Schema
status (required)
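The description's flag guidance maps directly onto the optional booleans. The hypothetical builder below encodes it; the function name and the campaignId value are invented for illustration.

```python
# Hypothetical argument builder for dynamoi_get_campaign, encoding the
# description's guidance: includeAnalytics for one-campaign performance,
# includeDeploymentStatus for delivery/deployment blockers, and
# includeCountries only when the full country list is needed.
def build_get_campaign_args(
    campaign_id: str,
    *,
    with_analytics: bool = False,
    with_deployment: bool = False,
    with_countries: bool = False,
) -> dict:
    args = {"campaignId": campaign_id}          # required
    if with_analytics:
        args["includeAnalytics"] = True
    if with_deployment:
        args["includeDeploymentStatus"] = True
    if with_countries:
        args["includeCountries"] = True
    return args

print(build_get_campaign_args("camp_42", with_analytics=True))
```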
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and destructiveHint=false, and the description adds value by specifying the returned fields (budget, targeting, etc.) and post-mutation behavior. However, it could further detail the response structure or any limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (three sentences) with no redundant phrases. It front-loads the core purpose, then provides exclusions and parameter guidance, making it efficient to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with no output schema, the description adequately covers the key returned fields and parameter usage. However, it could explicitly mention that the response is either JSON or summary format, which is implied but not stated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description effectively explains the purpose of includeCountries and format parameters, providing context beyond the enum and boolean types. This helps the agent decide when to set these optional fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving full details for one campaign, including specific fields like budget, targeting, and platform status. It explicitly distinguishes itself from the sibling tool dynamoi_list_campaigns, ensuring no ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on when to use (for full details of a single campaign) and when not to (for lists). It also gives parameter-specific advice, such as using includeCountries=true only when needed and preferring format=summary after mutations, which is actionable and clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_get_campaign_readiness - Get Campaign Readiness (A)
Read-only

Use this when the user is planning a campaign and wants to know if the proposed inputs are ready before dynamoi_launch_campaign. This validates readiness and targeting without creating a campaign. Do not use this to create or mutate campaigns.

Parameters (JSON Schema)
format (optional)
endDate (optional)
artistId (required)
budgetType (optional)
spotifyUrl (optional)
contentType (optional)
budgetAmount (optional)
campaignType (required)
mediaAssetIds (optional)
youtubeVideoId (optional)
locationTargets (optional)

Output Schema
status (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, but the description adds that it 'validates readiness and targeting without creating a campaign', reinforcing the non-destructive, read-only behavior beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the key use case and exclusions. No fluff or redundant wording.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema, the description adequately covers the tool's purpose and behavior. However, the 0% param coverage means the agent may need more guidance on common inputs, but the description remains sufficient for a readiness-check tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% with 11 parameters. The description does not explain any parameter's meaning or usage, leaving the agent to rely solely on the schema, which lacks descriptions. The description fails to compensate for the lack of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool validates campaign readiness and targeting before launch, distinguishing it from dynamoi_launch_campaign. The verb 'get' implies read-only, and the purpose is specific: checking if proposed inputs are ready.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use (when planning a campaign to know readiness) and when not to use (do not create or mutate campaigns). Also names the sibling tool dynamoi_launch_campaign as the alternative for launching.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_get_platform_status - Get Platform Status (A)
Read-only

Use this when the user wants to know whether Spotify, Meta, or YouTube are connected and what setup steps still block launches. When polling after dynamoi_start_meta_connection or dynamoi_start_youtube_channel_link, pass the returned onboardingAttemptId and onboardingFlow so Dynamoi ops can correlate the chat-first browser step. Do not use this for billing details; use dynamoi_get_billing when the question is about credits or subscription state. Never use this to personalize generic Instagram or marketing-advice questions.

Parameters (JSON Schema)
format (optional)
artistId (required)
onboardingFlow (optional)
onboardingAttemptId (optional)

Output Schema
status (required)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the tool is safely read-only. The description adds context about which specific platforms are checked and that it reports setup blockers, but does not reveal behavioral traits like latency or data source freshness. With good annotations, a 3 is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, each essential. The most critical information is front-loaded ('whether Spotify, Meta, or YouTube are connected'). No superfluous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately covers purpose and usage context, but omits parameter explanations, which are essential for correct invocation. Without detailing the parameters, an agent may not know how to properly format the call, especially the required artistId and optional format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description fails to explain either parameter: it does not mention that 'artistId' identifies the artist or that 'format' controls output style (json vs summary). The description adds no value beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: checking connection status (Spotify, Meta, YouTube) and setup blockers for launches. It uses specific verbs ('know whether...connected...what setup steps...block') and distinguishes from siblings like dynamoi_get_billing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (user asks about platform connections and setup steps), when not to use (billing questions), and names the alternative tool (dynamoi_get_billing). It also adds a negative instruction ('never use to personalize...').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_launch_campaign - Start Campaign Launch Workflow (A)
Idempotent

Use this when the user explicitly wants to create a new Smart Campaign or YouTube Campaign and start the launch workflow with provided details. Ads are not necessarily live until the returned delivery state is ACTIVE. For review or demo Smart Campaign launches that already specify the artist, content title, budget, countries, and reusable media assets, you may omit spotifyUrl and endDate because Dynamoi can infer reviewer-safe defaults. Do not invent placeholder spotifyUrl or endDate values for those review/demo launches; omit them and let Dynamoi infer them. After a successful launch, answer from the returned campaign details directly instead of chaining more tools unless the user explicitly asked for more. Do not use this for recommendations or previews; this creates a real campaign workflow or demo-safe simulated campaign.

Parameters (JSON Schema)
adCopy (optional)
endDate (optional)
artistId (required)
budgetType (required)
spotifyUrl (optional)
contentType (required)
budgetAmount (required)
budgetSplits (required)
campaignType (required)
contentTitle (required)
appleMusicUrl (optional)
mediaAssetIds (optional)
youtubeVideoId (optional)
clientRequestId (required)
locationTargets (optional)
userIntentSummary (optional)
useAiGeneratedCopy (optional)

Output Schema
status (required)
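The review/demo rule in the description (omit spotifyUrl and endDate rather than inventing placeholders) is easy to get wrong, so a sketch helps. This is a hypothetical argument set: the required parameter names come from the schema listing, but every value, including the enum strings and the budgetSplits shape, is an unverified assumption.

```python
# Hypothetical demo/review launch arguments for dynamoi_launch_campaign.
# Per the description, spotifyUrl and endDate are deliberately OMITTED so
# Dynamoi can infer reviewer-safe defaults; inventing placeholder values is
# explicitly discouraged. All values below are illustrative assumptions.
demo_launch_args = {
    "artistId": "artist_123",
    "campaignType": "SMART",                      # assumed enum value
    "contentType": "TRACK",                       # assumed enum value
    "contentTitle": "Demo Single",
    "budgetType": "TOTAL",                        # assumed enum value
    "budgetAmount": 100,
    "budgetSplits": {"meta": 50, "youtube": 50},  # assumed shape
    "mediaAssetIds": ["asset_1"],                 # reusable media assets
    "clientRequestId": "req-001",                 # required; supports idempotent retries
}
# The description's rule: do not fabricate these two for review/demo launches.
assert "spotifyUrl" not in demo_launch_args and "endDate" not in demo_launch_args
```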
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that ads are not live until delivery state is ACTIVE, and notes it creates real or demo-safe simulated campaigns. Annotations already indicate idempotentHint=true, and the description does not contradict but adds context about post-launch behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with purpose and usage rules, but includes several sentences of guidance that could be tightened. Overall, it is reasonably concise for the amount of behavioral detail provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high parameter count, nested objects, and lack of output schema, the description is incomplete. It does not explain return values, error states, or parameter details beyond a few scenarios, leaving substantial gaps for the AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description only clarifies spotifyUrl and endDate usage in review/demo scenarios. It does not explain the majority of the 17 parameters (e.g., adCopy, appleMusicUrl, userIntentSummary), leaving significant ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is for creating a new Smart Campaign or YouTube Campaign and starting the launch workflow. It specifies the exact verb and resource, and distinguishes between campaign types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use and when-not-to-use guidance. It advises against using for recommendations/previews, gives specific scenarios for omitting spotifyUrl/endDate, and instructs not to chain tools unnecessarily after launch.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_list_artists - List Artists (A)
Read-only

Use this when the user wants to see which artists or YouTube channels they manage, along with billing status, active campaign count, and their role. Pass artistId when you need the full profile/readiness details for one artist instead of a roster page. Do not use this for campaign details; use dynamoi_list_campaigns or dynamoi_get_campaign. Never use this for generic social-media or marketing advice, including Instagram follower-growth questions, unless the user explicitly asked about their Dynamoi roster. If the result is empty, the user is brand-new — do not stop with 'no records found'; instead route via dynamoi_get_account_overview.recommendedNextActions or read dynamoi://playbooks/onboarding-tree.

Parameters (JSON Schema)
limit (optional)
cursor (optional)
format (optional)
artistId (optional)

Output Schema
status (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations show read-only, non-destructive. Description adds context about returned fields (billing, campaign count, role) beyond annotation safety profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with core purpose, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While purpose and usage are clear, missing parameter descriptions and output format details leave the agent guessing about pagination and formatting options.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No explanation of any of the three parameters (limit, cursor, format). Schema coverage is 0%, and description provides no guidance on how to use them.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool lists artists/YouTube channels with billing status, campaign count, and role. Distinguishes from campaign-related siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (view artists) and when not to (campaigns, generic advice), with direct sibling references.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_list_available_countries - List Available Countries (A)
Read-only

Use this when the user asks which countries they can target for a Smart Campaign or YouTube campaign. Always pass campaignType because Smart Campaign and YouTube country catalogs are different. Do not use this for generic country marketing advice.

Parameters (JSON Schema)
limit (optional)
query (optional)
cursor (optional)
format (optional)
campaignType (required)

Output Schema
status (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and non-destructive behavior. The description adds key behavioral context: the country list depends on campaignType, which is not implied by annotations alone. This helps the agent understand that results vary by parameter value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with critical usage guidance, and contains no extraneous information. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description covers core usage and the output schema exists to document return values, it fails to explain optional parameters like pagination (cursor, limit) or output format (format). For a 5-parameter tool, this leaves the agent partially uninformed about full capabilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description should compensate but only mentions campaignType (the required parameter). It does not explain limit, query, cursor, or format, leaving the agent to infer their meaning from names and schema constraints. This is a significant gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: listing countries for targeting in Smart Campaign or YouTube campaigns. It uses specific verbs ('list') and resources ('available countries'), and differentiates from generic country queries, leaving no ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use ('when the user asks which countries they can target') and when not to use ('Do not use this for generic country marketing advice'). It also provides a crucial usage requirement: 'Always pass campaignType because Smart Campaign and YouTube country catalogs are different.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_list_campaigns: List Campaigns, grade A
Read-only

Use this when the user wants to browse campaigns for one artist, optionally filtered by type or status. Do not use this for a single campaign deep dive; use dynamoi_get_campaign for that. Never use this to personalize generic marketing advice. If the user has no artists yet, do not call this — route via dynamoi_get_account_overview first.
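The routing rule at the end of the description (no artists yet means route elsewhere first) and the optional filters can be sketched as a small argument builder. The helper name and the "active" status value are hypothetical; only the parameter names come from the schema table.

```python
# Hypothetical argument-assembly sketch for dynamoi_list_campaigns.
# artistId is the only required field per the schema; the status and
# campaignType filter values are illustrative guesses.
def build_list_campaigns_args(artist_id, status=None, campaign_type=None):
    if not artist_id:
        # Per the description: with no artist yet, the agent should route
        # via dynamoi_get_account_overview instead of calling this tool.
        raise ValueError("route via dynamoi_get_account_overview first")
    args = {"artistId": artist_id}
    if status is not None:
        args["status"] = status
    if campaign_type is not None:
        args["campaignType"] = campaign_type
    return args

args = build_list_campaigns_args("artist-123", status="active")
```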

Parameters (JSON Schema)

Name          Required  Description  Default
limit         No
cursor        No
format        No
status        No
artistId      Yes
campaignType  No

Output Schema

Name    Required  Description
status  Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the usage is safe. The description adds little behavioral context beyond 'browse', such as pagination behavior or return format. With good annotations, the burden on the description is lower; a 3 is appropriate as it adds minimal additional insight.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each earning its place. The first states purpose and scope, the second provides an exclusion and alternative, the third adds a behavioral prohibition, and the fourth routes users without artists to dynamoi_get_account_overview. No extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 6 parameters (including pagination and format), no output schema, and moderate complexity, the description could be more complete by explaining pagination (limit, cursor) and output format (format parameter). It covers the essential usage but leaves gaps for practical invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It mentions filtering by 'type' and 'status', which maps to two parameters (campaignType, status), but leaves other parameters like limit, cursor, format, and artistId unexplained. This is insufficient for complete parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the primary action (browse campaigns) and the resource (one artist's campaigns), explicitly distinguishes from the sibling tool 'dynamoi_get_campaign' for deep dives, and mentions optional filtering by type or status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance ('when the user wants to browse campaigns') and when-not-to-use ('do not use for a single campaign deep dive' with alternative tool named). It also includes a prohibitive statement ('never use this to personalize generic marketing advice').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_list_media_assets: List Media Assets, grade A
Read-only

Use this when the user wants to choose from uploaded images or videos that can be reused in a campaign launch. Do not use this when the user only wants campaign status or analytics. Use format=json when you need asset IDs for a follow-up launch. Request includeUrls only when the assistant must display or inspect public-safe asset URLs.
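The format and includeUrls guidance above translates into a simple decision rule, sketched below. The helper and the boolean flags are hypothetical; the "json" value for format is taken directly from the description, while the True value for includeUrls is an assumed boolean encoding.

```python
# Hypothetical sketch of the format/includeUrls guidance for
# dynamoi_list_media_assets: request machine-readable JSON when asset IDs
# feed a follow-up launch, and request URLs only when they must be shown.
def build_media_assets_args(artist_id, need_ids_for_launch, must_display_urls):
    args = {"artistId": artist_id}
    if need_ids_for_launch:
        args["format"] = "json"   # per the description's guidance
    if must_display_urls:
        args["includeUrls"] = True
    return args

args = build_media_assets_args("artist-123", True, False)
```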

Parameters (JSON Schema)

Name         Required  Description  Default
limit        No
cursor       No
format       No
artistId     Yes
includeUrls  No

Output Schema

Name    Required  Description
status  Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark readOnlyHint true and destructiveHint false; the description adds context on reuse and public-safe URLs, though it lacks pagination behavior details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four succinct sentences, front-loaded with primary usage, no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the description covers key usage and parameter guidance; there is a slight gap on response structure, but it is sufficient for an agent to decide.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description adds value for format and includeUrls but omits guidance for limit, cursor, and artistId semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists media assets for campaign launch use, distinguishing it from siblings dealing with campaigns or analytics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly specifies when to use (user wants to choose media assets) and when not (campaign status/analytics), plus parameter guidance for format and includeUrls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_start_meta_connection: Start Meta Connection, grade A
Idempotent

Use this when the user is ready to connect Meta for Spotify Smart Campaigns from chat. This returns a signed Meta OAuth URL and may send the user through a Page/Instagram selection step before the chat-first return page. After the user returns, poll dynamoi_get_platform_status with the returned onboardingAttemptId and onboardingFlow=meta until platforms.meta.status is oauth_complete, partnership_pending, or partnership_active.
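The polling loop the description prescribes can be sketched as below. This is a hypothetical illustration: call_tool is a stand-in for a real MCP client, the response shape is inferred from the description's platforms.meta.status path, and a production loop would sleep between attempts.

```python
# Hypothetical polling sketch for the post-OAuth flow: poll
# dynamoi_get_platform_status until one of the terminal statuses named
# in the description is reached.
TERMINAL = {"oauth_complete", "partnership_pending", "partnership_active"}

def poll_meta_status(call_tool, attempt_id, max_polls=10):
    for _ in range(max_polls):
        result = call_tool("dynamoi_get_platform_status", {
            "onboardingAttemptId": attempt_id,
            "onboardingFlow": "meta",
        })
        status = result["platforms"]["meta"]["status"]
        if status in TERMINAL:
            return status
        # A real implementation would time.sleep() here between polls.
    return None  # caller decides how to handle a timeout

# Fake client that completes on the second poll, for illustration only.
responses = iter([
    {"platforms": {"meta": {"status": "oauth_pending"}}},
    {"platforms": {"meta": {"status": "oauth_complete"}}},
])
final = poll_meta_status(lambda name, args: next(responses), "attempt-1")
```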

Parameters (JSON Schema)

Name               Required  Description  Default
format             No
artistId           Yes
userIntentSummary  No

Output Schema

Name    Required  Description
status  Yes
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses key behaviors beyond annotations: returns a signed OAuth URL, may go through a Page/Instagram selection step, and requires subsequent polling. Consistent with readOnlyHint=false and openWorldHint=true. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences that front-load the purpose and include the necessary polling details. Slightly verbose but still efficient for the complexity. Could be more concise, but it's well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers the main flow and next steps, and output schema exists (so return values are handled). However, lacks explanation of error scenarios or what the immediate return contains beyond the OAuth URL. Adequate for an OAuth initiation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description gives no meaning to the parameters (artistId, format, userIntentSummary). It mentions polling parameters (onboardingAttemptId, onboardingFlow) but not input parameters. The description fails to compensate for missing schema docs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action: 'Use this when the user is ready to connect Meta for Spotify Smart Campaigns from chat.' It explains the core function (returns a signed Meta OAuth URL) and distinguishes it from sibling tools that handle other aspects of campaigns or links.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when to use (user ready to connect) and detailed post-use instructions: poll dynamoi_get_platform_status with specific parameters until a certain status. Does not explicitly exclude alternatives, but the polling guidance is strong.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_start_subscription_checkout: Start Subscription Checkout, grade A
Idempotent

Use this when the user is ready to activate Dynamoi managed advertising billing for one artist. This creates or reuses a secure Stripe Checkout URL that the user can open from chat. Checkout returns to a Dynamoi page that tells the user to come back to the AI assistant; after that, poll dynamoi_get_billing with the returned onboardingAttemptId to confirm billing became active. Do not use this for billing status checks; use dynamoi_get_billing.
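The exclusion rule in the last sentence is a routing decision an agent must make before calling anything. A minimal sketch, with hypothetical intent labels (the tool names are real, the "check_status"/"activate_billing" strings are assumptions):

```python
# Hypothetical routing sketch for the description's rule: billing status
# questions go to dynamoi_get_billing; activation intent goes to
# dynamoi_start_subscription_checkout.
def pick_billing_tool(user_intent):
    if user_intent == "check_status":
        return "dynamoi_get_billing"
    if user_intent == "activate_billing":
        return "dynamoi_start_subscription_checkout"
    raise ValueError(f"unrecognized intent: {user_intent}")

tool = pick_billing_tool("check_status")
```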

Parameters (JSON Schema)

Name               Required  Description  Default
format             No
artistId           Yes
userIntentSummary  No

Output Schema

Name    Required  Description
status  Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate mutation (readOnlyHint=false), and the description goes beyond them by covering checkout URL creation and the return flow. It could mention authorization or failure modes, but still adds useful context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, all essential, front-loaded with core purpose and user action. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes the workflow and links to dynamoi_get_billing, but does not cover output schema details or handle edge cases like existing subscriptions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain any of the three parameters (format, artistId, userIntentSummary), leaving the agent to infer their purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it is for activating Dynamoi managed advertising billing for one artist, with explicit differentiation from billing status checks by naming dynamoi_get_billing as alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use (user ready to activate billing), describes the checkout flow and polling requirement, and tells when not to use (billing status checks).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dynamoi_update_campaign: Update Campaign, grade A
Destructive, Idempotent

Use this when the user explicitly wants to pause, resume, or update the budget/end date for an existing campaign. Set action to pause, resume, or update_budget. Do not use this for inspection-only questions; this changes live campaign workflow state or external campaign settings.
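The action enum and the conditional fields can be sketched as a guarded builder. Only the action values and parameter names come from the source; pairing expectedCurrentStatus with the call as an optimistic-concurrency guard is an assumption suggested by the expectedCurrent* names in the schema, and the "active" value is illustrative.

```python
# Hypothetical argument builder for dynamoi_update_campaign. The three
# action values come from the description; requiring budgetAmount only
# for update_budget is an inferred constraint, not documented.
VALID_ACTIONS = {"pause", "resume", "update_budget"}

def build_update_args(campaign_id, action, budget_amount=None,
                      expected_status=None):
    if action not in VALID_ACTIONS:
        raise ValueError(f"action must be one of {sorted(VALID_ACTIONS)}")
    args = {"campaignId": campaign_id, "action": action}
    if action == "update_budget":
        if budget_amount is None:
            raise ValueError("update_budget requires budgetAmount")
        args["budgetAmount"] = budget_amount
    if expected_status is not None:
        # Assumed optimistic-concurrency guard against stale state.
        args["expectedCurrentStatus"] = expected_status
    return args

args = build_update_args("cmp-1", "pause", expected_status="active")
```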

Parameters (JSON Schema)

Name                         Required  Description  Default
action                       Yes
endDate                      No
campaignId                   Yes
budgetAmount                 No
clientRequestId              No
userIntentSummary            No
expectedCurrentStatus        No
expectedCurrentEndDate       No
expectedCurrentBudgetAmount  No

Output Schema

Name    Required  Description
status  Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description reinforces the destructiveHint annotation by stating it changes live workflow state. While no rate limits or auth details are added, the core behavioral impact is clearly conveyed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise, front-loaded sentences with no extraneous information. Every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 9 parameters and an output schema, the description is too brief. It lacks details on conditional parameter requirements and expected responses, though the annotations partly compensate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. Only the 'action' parameter is explained; other important parameters like endDate, budgetAmount, and optional fields are not described. The description adds minimal value beyond the enum.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: pausing, resuming, or updating the budget/end date for existing campaigns. It distinguishes the tool from inspection siblings like dynamoi_get_campaign.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use (explicit user request for pause/resume/update) and when not to use (inspection-only questions), providing clear decision criteria for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fetch: Fetch (OpenAI Connectors), grade A
Read-only

OpenAI ChatGPT Deep Research / Connectors fetch contract. Given an id returned by search (formatted as 'artist:<uuid>', 'campaign:<uuid>', or 'smartlink:<uuid>'), returns the full record for citation.
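The prefix contract can be sketched as a small validator; ids produced by search carry an 'artist:', 'campaign:', or 'smartlink:' prefix. The helper name and the "4f2a" key are hypothetical.

```python
# Hypothetical validator for the fetch id contract described above.
VALID_PREFIXES = ("artist:", "campaign:", "smartlink:")

def parse_fetch_id(fetch_id):
    # Split into (record type, record key); reject unknown prefixes so a
    # malformed id fails before the tool call rather than server-side.
    for prefix in VALID_PREFIXES:
        if fetch_id.startswith(prefix):
            return prefix.rstrip(":"), fetch_id[len(prefix):]
    raise ValueError(f"unsupported id format: {fetch_id!r}")

kind, key = parse_fetch_id("campaign:4f2a")
```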

Parameters (JSON Schema)

Name  Required  Description  Default
id    Yes

Output Schema

Name    Required  Description
status  Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds behavioral context beyond annotations by detailing the ID format and that it returns the full record. Annotations already declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description does not contradict annotations and provides useful information about input constraints. It doesn't mention error handling or output specifics, but given the presence of an output schema, this is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, front-loaded with the tool's purpose and key constraints. Every sentence adds value: the first identifies the type and origin of the ID, the second states the action and output. There is no redundant or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, read-only operation, output schema available), the description covers all essential aspects: what the tool does, how to use it (id from search), the ID format, and the return value. The presence of an output schema eliminates the need to describe the return structure. The description is sufficient for an agent to correctly invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has a single parameter `id` with no description (0% coverage). The description adds significant meaning by specifying that the id must be returned by `search` and formatted as 'artist:<uuid>', 'campaign:<uuid>', or 'smartlink:<uuid>'. This goes beyond the schema's type constraints (string, length limits) and tells the agent where to get the value and what formats are valid.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('returns the full record') and resource ('record identified by an id from search'). It specifies the ID format with three examples, distinguishing it from sibling get tools that are type-specific. The verb 'fetch' is appropriate for retrieving data, and the description aligns with the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use this tool when you have an id returned by `search`, which provides clear context. It doesn't explicitly state when not to use it, but the implied workflow (search then fetch) and the existence of sibling get tools (e.g., dynamoi_get_artist) suggest alternatives. Slightly more explicit guidance on when to prefer fetch over those get tools would improve clarity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
