Dynamoi
Server Details
Promote music on Spotify and grow YouTube channels through AI-powered Meta and Google ad campaigns.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: getDynamoi/mcp
- GitHub Stars: 1
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 28 of 28 tools scored. Lowest: 3.4/5.
Each tool targets a specific action on a distinct resource (smart link, campaign, artist, account, etc.). Descriptions clearly differentiate overlapping functions like readiness checks and deployment status.
All tools follow 'dynamoi_verb_noun' snake_case pattern consistently, making it predictable for an agent to infer purpose from the name.
28 tools is slightly high but well-justified given the breadth of functionality: smart link management, campaign management, analytics, billing, and account operations. No redundant tools.
Covers full lifecycle for smart links and campaigns, including creation, read, update, pause/resume, analytics, and readiness checks. Minor gaps: no delete/archive for campaigns or smart links, and media asset management is limited to listing.
Available Tools
23 tools

dynamoi_create_smart_link_from_spotify: Create Free Smart Link from Spotify (Idempotent)
Use this when the user wants to create one free Dynamoi Smart Link from a Spotify album or track URL/URI, or a single starter release from a Spotify artist URL. For full-catalog artist imports or artist hub requests, prefer dynamoi_create_smart_links_from_spotify_artist. Smart Links are free: no per-link fee, no subscription requirement, and no upgrade gate. This does not create a paid ad campaign. Spotify playlist URLs are not supported today. If the Smart Link already exists, return the existing link instead of creating a duplicate. In the final answer, lead with the public URL and do not expose internal IDs unless asked.
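A minimal sketch of what a client-side call might look like, assuming standard MCP JSON-RPC framing; the artist ID, track URI, and request ID below are invented for illustration, and parameter names come from the schema that follows.

```python
import json

# Hypothetical tools/call payload for dynamoi_create_smart_link_from_spotify.
# All argument values are invented; only the parameter names are from the schema.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "dynamoi_create_smart_link_from_spotify",
        "arguments": {
            "artistId": "artist-123",                             # required
            "spotifyUrl": "spotify:track:4uLU6hMCjMI75M1A2tKUQC", # required; URL or URI
            "clientRequestId": "create-link-001",                 # optional; assumed dedup key
        },
    },
}

print(json.dumps(payload, indent=2))
```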
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| artistId | Yes | ||
| spotifyUrl | Yes | ||
| clientRequestId | No | ||
| customDescription | No | ||
| userIntentSummary | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint: false, idempotentHint: true, destructiveHint: false) are complemented by description details about free nature, no upgrade gate, and duplicate handling. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Compact description with bullet-like exclusions, front-loaded with usage intent; the repeated free-service framing could be trimmed for AI agents.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema. Description mentions return behavior for duplicates but lacks details on return values, parameter specifics, or format enumeration differences. Some gaps remain for a creation tool with multiple parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description fails to explain 6 parameters beyond mentioning Spotify URL/URI. Parameter roles like clientRequestId, customDescription, format are not clarified, leaving large gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool creates a free Smart Link from a Spotify artist, album, or track URL/URI. It distinguishes from siblings by specifying it does not create paid campaigns and notes unsupported playlist URLs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use (create free Smart Link from Spotify) and what-not-to-use (paid ad campaigns). Also mentions deduplication behavior and unsupported playlist URLs, giving clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_create_smart_links_from_spotify_artist: Create Free Smart Links for Spotify Artist (Idempotent)
Use this when the user gives a Spotify artist URL and wants Dynamoi to create, import, or refresh free Smart Links for the artist catalog and return the artist hub. If the signed-in user has no Dynamoi artist yet, omit artistId so Dynamoi can create the first artist from the Spotify artist profile. This starts the background catalog import so the user does not need to open the dashboard. Smart Links are free: no per-link fee, no subscription requirement, and no upgrade gate. This does not create a paid ad campaign. In the final answer, lead with the artist hub URL and current public Smart Link URLs; do not expose internal IDs unless asked.
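Two hedged argument sketches following the description's guidance; the artist ID and Spotify URL are invented examples, not real identifiers.

```python
# Sketch of argument sets for dynamoi_create_smart_links_from_spotify_artist.
existing_artist_args = {
    "artistId": "artist-123",  # invented; the user's existing Dynamoi artist
    "spotifyArtistUrl": "https://open.spotify.com/artist/0TnOYISbd1XYRBk9myaseg",
}

# Per the description: omit artistId when the signed-in user has no Dynamoi
# artist yet, so Dynamoi can create the first artist from the Spotify profile.
first_time_args = {
    "spotifyArtistUrl": "https://open.spotify.com/artist/0TnOYISbd1XYRBk9myaseg",
}
```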
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| artistId | No | ||
| clientRequestId | No | ||
| spotifyArtistUrl | Yes | ||
| userIntentSummary | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool starts a background catalog import, is free, and does not create paid campaigns, aligning well with annotations (readOnlyHint=false, destructiveHint=false, openWorldHint=true). It also provides guidance on presenting results. Minor gap: the idempotency hinted at in the annotations is never explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (5 sentences), front-loads the core purpose, and includes essential context about cost and result presentation without unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers the main use case and result format, it omits parameter details and does not fully leverage the readable output schema. Adequate for medium complexity but leaves gaps for effective parameter supply.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should explain parameters but only indirectly references 'spotifyArtistUrl'. It fails to describe 'artistId', 'format', 'clientRequestId', and 'userIntentSummary', leaving the agent without meaning for most parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool's purpose: creating/importing/refreshing free Smart Links for an artist's catalog when given a Spotify artist URL. It distinguishes from sibling 'dynamoi_create_smart_link_from_spotify' by focusing on the entire catalog rather than a single link.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use (user provides Spotify artist URL, wants free Smart Links) and what it does not do (not a paid ad campaign). Lacks explicit mention of alternatives like the singular sibling tool, but context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_get_account_overview: Get Account Overview (Read-only)
Use this when the user explicitly asks about the signed-in Dynamoi account itself, such as who is logged in, how many organizations or artists it can access, or whether account-level platform connections exist. Always pass intent to match that explicit account question. Do not use this to confirm a specific Meta or YouTube onboarding attempt because this account-level state can span multiple artists; use dynamoi_get_platform_status for the target artist instead. Do not use this to enumerate artists one by one; use dynamoi_list_artists for that. Never use this to 'check context' before answering generic Instagram, lyrics, songwriting, or marketing-advice questions, even if Dynamoi is attached.
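A sketch of the call arguments, assuming `format` is an output-style selector (the schema itself gives no descriptions); the `intent` value is an invented example of the explicit account question the description requires.

```python
# Hypothetical arguments for dynamoi_get_account_overview. The schema marks
# intent as required; its value here is invented for illustration.
args = {
    "intent": "user asked how many artists this account can access",  # required
    "format": "summary",  # optional; assumed to select output style
}
```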
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| intent | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive. Description adds behavioral context: should only be used for explicit account questions, not for generic context. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is front-loaded with main purpose and usage. Some redundancy in warnings, but overall clear and well-structured. Slightly longer than necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains what the tool returns (account details, connections) sufficiently for an agent to understand. Lacks details on response format but adequate for simple query.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description adds meaning for intent parameter by listing valid values and explaining usage. Does not address format parameter. Partially compensates for lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves account overview details like logged-in user, organizations, artists, and platform connections. It distinguishes from siblings by explicitly stating not to use for artist enumeration or generic context checking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use (user asks about signed-in account) and when-not-to-use (checking context, generic questions). Names alternative tool dynamoi_list_artists for artist enumeration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_get_artist_analytics: Get Artist Analytics (Read-only)
Use this when the user wants artist-level performance across all campaigns, including 30-day rollups or daily breakdowns. Pass granularity=DAILY when the user asks for a daily breakdown. Pass format=summary when the user wants a written rollup, a strongest-campaign verdict, or a direct answer you can relay immediately. If this tool already returned the requested strongest-campaign comparison, stop and answer instead of calling more analytics tools. For one campaign's metrics, use dynamoi_get_campaign with includeAnalytics=true.
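Two argument sketches mapping the description's guidance onto the schema; the artist ID is invented, and the `granularity` and `format` values are the ones the description documents.

```python
# Hypothetical arguments for dynamoi_get_artist_analytics.
daily_breakdown = {
    "artistId": "artist-123",  # invented
    "granularity": "DAILY",    # per the description: daily breakdown
}

written_rollup = {
    "artistId": "artist-123",  # invented
    "format": "summary",       # per the description: written rollup / direct answer
}
```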
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| artistId | Yes | ||
| dateRange | No | ||
| granularity | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds behavioral context beyond annotations: it discloses that the tool can return a written rollup or strongest-campaign verdict, and instructs the agent to stop and answer if the tool already returned the requested comparison. This enriches transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each delivering essential information without redundancy. The most critical usage guidance is front-loaded, and the description avoids extraneous details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters (including a nested object), no output schema, and there are multiple sibling analytics tools, the description adequately covers when to use, parameter choices, and expected return types (rollups, daily breakdowns, written verdicts). It lacks explicit mention of default values or error states, but is sufficient for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (no descriptions in schema). The description adds meaning for 'granularity' (DAILY for daily breakdown) and 'format' (summary for written rollup/direct answer), but does not explain 'artistId' or 'dateRange' beyond what the schema provides. It partially compensates for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves 'artist-level performance across all campaigns,' using a specific verb ('get') and resource ('artist analytics'), and distinguishes from sibling dynamoi_get_campaign by specifying scope (all campaigns vs. one campaign).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('when the user wants artist-level performance'), provides parameter guidance (granularity=DAILY for daily breakdowns, format=summary for written rollups), and explicitly tells when NOT to use it (for one campaign's metrics, use dynamoi_get_campaign with includeAnalytics=true).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_get_billing: Get Billing (Read-only)
Use this when the user asks about billing state, credit balance, promo limits, or whether billing is blocking launches for one artist. When polling after dynamoi_start_subscription_checkout, pass the returned onboardingAttemptId so Dynamoi ops can correlate the chat-first Checkout attempt. Do not use this for campaign analytics or platform connection troubleshooting.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| artistId | Yes | ||
| onboardingAttemptId | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description adds minimal behavioral context. It implies the operation is specific to one artist but does not disclose return format or other behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with usage conditions, no redundant words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's purpose and usage scope, but fails to document the 'format' parameter or return value, which could lead to incomplete agent decisions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain the 'format' parameter or the 'artistId' parameter beyond implying an artist is needed. The enum for format is undocumented in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves billing state, credit balance, promo limits, and billing blocking for an artist. It distinguishes from sibling tools by explicitly excluding campaign analytics and platform connection troubleshooting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use and when-not-to-use guidance, but does not name alternative sibling tools for the excluded cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_get_campaign: Get Campaign (Read-only)
Use this when the user wants full details for one campaign, including budget, targeting, platform status, and next actions. Set includeAnalytics=true for one-campaign performance, includeDeploymentStatus=true for delivery/deployment blockers, and includeCountries=true only when the full country list is needed. Do not use this for a campaign list; use dynamoi_list_campaigns instead. After a successful launch or campaign mutation, prefer format=summary when you need a follow-up read to relay the final answer.
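A sketch combining the include flags the description documents; the campaign ID is invented, and the boolean defaults shown are assumptions about sensible usage, not documented defaults.

```python
# Hypothetical arguments for dynamoi_get_campaign.
args = {
    "campaignId": "cmp-456",          # invented; required
    "includeAnalytics": True,         # one-campaign performance
    "includeDeploymentStatus": True,  # delivery/deployment blockers
    "includeCountries": False,        # per the description: True only when the full list is needed
}
```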
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| campaignId | Yes | ||
| includeAnalytics | No | ||
| includeCountries | No | ||
| analyticsDateRange | No | ||
| analyticsGranularity | No | ||
| includeDeploymentStatus | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, and the description adds value by specifying the returned fields (budget, targeting, etc.) and post-mutation behavior. However, it could further detail the response structure or any limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) with no redundant phrases. It front-loads the core purpose, then provides exclusions and parameter guidance, making it efficient to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with no output schema, the description adequately covers the key returned fields and parameter usage. However, it could explicitly mention that the response is either JSON or summary format, which is implied but not stated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description effectively explains the purpose of includeCountries and format parameters, providing context beyond the enum and boolean types. This helps the agent decide when to set these optional fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving full details for one campaign, including specific fields like budget, targeting, and platform status. It explicitly distinguishes itself from the sibling tool dynamoi_list_campaigns, ensuring no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when to use (for full details of a single campaign) and when not to (for lists). It also gives parameter-specific advice, such as using includeCountries=true only when needed and preferring format=summary after mutations, which is actionable and clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_get_campaign_readiness: Get Campaign Readiness (Read-only)
Use this when the user is planning a campaign and wants to know if the proposed inputs are ready before dynamoi_launch_campaign. This validates readiness and targeting without creating a campaign. Do not use this to create or mutate campaigns.
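A readiness-check sketch: pass the proposed launch inputs so they can be validated without creating a campaign. All values are invented, and the `campaignType`, `budgetType`, and `locationTargets` shapes are assumptions, since neither the schema nor the description documents them.

```python
# Hypothetical arguments for dynamoi_get_campaign_readiness.
args = {
    "artistId": "artist-123",             # invented; required
    "campaignType": "spotify_promotion",  # assumed enum value, not documented
    "spotifyUrl": "https://open.spotify.com/track/4uLU6hMCjMI75M1A2tKUQC",
    "budgetType": "daily",                # assumed enum value, not documented
    "budgetAmount": 20,
    "locationTargets": ["US", "CA"],      # assumed shape: country codes
}
```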
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| endDate | No | ||
| artistId | Yes | ||
| budgetType | No | ||
| spotifyUrl | No | ||
| contentType | No | ||
| budgetAmount | No | ||
| campaignType | Yes | ||
| mediaAssetIds | No | ||
| youtubeVideoId | No | ||
| locationTargets | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, but the description adds that it 'validates readiness and targeting without creating a campaign', reinforcing the non-destructive, read-only behavior beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the key use case and exclusions. No fluff or redundant wording.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the description adequately covers the tool's purpose and behavior. However, the 0% param coverage means the agent may need more guidance on common inputs, but the description remains sufficient for a readiness-check tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% with 11 parameters. The description does not explain any parameter's meaning or usage, leaving the agent to rely solely on the schema, which lacks descriptions. The description fails to compensate for the lack of parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool validates campaign readiness and targeting before launch, distinguishing it from dynamoi_launch_campaign. The verb 'get' implies read-only, and the purpose is specific: checking if proposed inputs are ready.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use (when planning a campaign to know readiness) and when not to use (do not create or mutate campaigns). Also names the sibling tool dynamoi_launch_campaign as the alternative for launching.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_get_platform_status: Get Platform Status (Read-only)
Use this when the user wants to know whether Spotify, Meta, or YouTube are connected and what setup steps still block launches. When polling after dynamoi_start_meta_connection or dynamoi_start_youtube_channel_link, pass the returned onboardingAttemptId and onboardingFlow so Dynamoi ops can correlate the chat-first browser step. Do not use this for billing details; use dynamoi_get_billing when the question is about credits or subscription state. Never use this to personalize generic Instagram or marketing-advice questions.
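A polling sketch for the chat-first onboarding flow the description mentions: after dynamoi_start_meta_connection returns, pass its identifiers back so Dynamoi ops can correlate the attempt. The result values here are invented stand-ins.

```python
# Hypothetical result from dynamoi_start_meta_connection (values invented).
start_result = {"onboardingAttemptId": "attempt-789", "onboardingFlow": "meta"}

# Poll dynamoi_get_platform_status with the same identifiers, per the description.
poll_args = {
    "artistId": "artist-123",  # invented; required
    "onboardingAttemptId": start_result["onboardingAttemptId"],
    "onboardingFlow": start_result["onboardingFlow"],
}
```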
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| artistId | Yes | ||
| onboardingFlow | No | ||
| onboardingAttemptId | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the tool is safely read-only. The description adds context about which specific platforms are checked and that it reports setup blockers, but does not reveal behavioral traits like latency or data source freshness. With good annotations, a 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, each essential. The most critical information is front-loaded ('whether Spotify, Meta, or YouTube are connected'). No superfluous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately covers purpose and usage context, but omits parameter explanations, which are essential for correct invocation. Without detailing the parameters, an agent may not know how to properly format the call, especially the required artistId and optional format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description fails to explain either parameter: it does not mention that 'artistId' identifies the artist or that 'format' controls output style (json vs summary). The description adds no value beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: checking connection status (Spotify, Meta, YouTube) and setup blockers for launches. It uses specific verbs ('know whether...connected...what setup steps...block') and distinguishes from siblings like dynamoi_get_billing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (user asks about platform connections and setup steps), when not to use (billing questions), and names the alternative tool (dynamoi_get_billing). It also adds a negative instruction ('never use to personalize...').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_get_smart_link: Get Smart Link (Read-only)
Use this when the user wants full details for one free Smart Link, including release, Spotify URL, public play.dynamoi.com URL, current status, theme source, and next actions. Add include=['analytics'] for visit/click analytics and include=['artist_settings'] for artist-level theme/pixel settings. In the final answer, lead with the public URL and do not expose internal IDs unless asked.
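An argument sketch using the include values the description documents; the link ID is invented, and identifying a link by `playLinkId` rather than `spotifyUrl` is an assumption, since the schema does not say which identifier takes precedence.

```python
# Hypothetical arguments for dynamoi_get_smart_link.
args = {
    "playLinkId": "pl-321",  # invented; assumed identifier for the link
    "include": ["analytics", "artist_settings"],  # per the description
}
```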
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | ||
| include | No | ||
| artistId | No | ||
| dateRange | No | ||
| playLinkId | No | ||
| spotifyUrl | No | ||
| granularity | No | ||
| includeBreakdowns | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, destructiveHint=false, and the description adds value by specifying the output fields (release, Spotify URL, etc.). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that includes usage context and output details. It is concise but lacks structure for parameter guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains what the tool returns but does not clarify how to identify the smart link via parameters. With no output schema, it partially covers completeness but misses parameter usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 4 parameters with 0% description coverage. The tool description does not mention any parameters or their roles, failing to compensate for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full details for one free Smart Link, listing specific fields. It distinguishes from siblings like list or analytics tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this when the user wants full details for one free Smart Link,' providing clear context. No explicit when-not-to-use or alternatives, but the sibling list implies alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_launch_campaign: Start Campaign Launch Workflow (Idempotent)
Use this when the user explicitly wants to create a new Smart Campaign or YouTube Campaign and start the launch workflow with provided details. Ads are not necessarily live until the returned delivery state is ACTIVE. For review or demo Smart Campaign launches that already specify the artist, content title, budget, countries, and reusable media assets, you may omit spotifyUrl and endDate because Dynamoi can infer reviewer-safe defaults. Do not invent placeholder spotifyUrl or endDate values for those review/demo launches; omit them and let Dynamoi infer them. After a successful launch, answer from the returned campaign details directly instead of chaining more tools unless the user explicitly asked for more. Do not use this for recommendations or previews; this creates a real campaign workflow or demo-safe simulated campaign.
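The omit-spotifyUrl/endDate rule for review/demo launches can be illustrated with a hypothetical payload builder. Only the field names come from the parameter table; the enum values (`"smart"`, `"track"`, `"total"`) and the `budgetSplits` shape are invented assumptions:

```python
# Hypothetical demo-safe launch payload for dynamoi_launch_campaign.
def build_demo_launch(artist_id, content_title, budget_amount,
                      countries, media_asset_ids, client_request_id):
    """Build a review/demo Smart Campaign launch: spotifyUrl and endDate
    are deliberately omitted so Dynamoi can infer reviewer-safe defaults."""
    return {
        "artistId": artist_id,
        "campaignType": "smart",        # assumed enum value
        "contentType": "track",         # assumed enum value
        "contentTitle": content_title,
        "budgetType": "total",          # assumed enum value
        "budgetAmount": budget_amount,
        "budgetSplits": {"meta": 1.0},  # assumed shape
        "locationTargets": countries,
        "mediaAssetIds": media_asset_ids,
        "clientRequestId": client_request_id,
        # spotifyUrl / endDate intentionally absent: per the description,
        # never invent placeholder values for review/demo launches.
    }

payload = build_demo_launch("art_1", "Demo Single", 100, ["US"], ["asset_1"], "req-001")
assert "spotifyUrl" not in payload and "endDate" not in payload
```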
| Name | Required | Description | Default |
|---|---|---|---|
| adCopy | No | | |
| endDate | No | | |
| artistId | Yes | | |
| budgetType | Yes | | |
| spotifyUrl | No | | |
| contentType | Yes | | |
| budgetAmount | Yes | | |
| budgetSplits | Yes | | |
| campaignType | Yes | | |
| contentTitle | Yes | | |
| appleMusicUrl | No | | |
| mediaAssetIds | No | | |
| youtubeVideoId | No | | |
| clientRequestId | Yes | | |
| locationTargets | No | | |
| userIntentSummary | No | | |
| useAiGeneratedCopy | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that ads are not live until delivery state is ACTIVE, and notes it creates real or demo-safe simulated campaigns. Annotations already indicate idempotentHint=true, and the description does not contradict but adds context about post-launch behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with purpose and usage rules, but includes several sentences of guidance that could be tightened. Overall, it is reasonably concise for the amount of behavioral detail provided.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high parameter count, nested objects, and lack of output schema, the description is incomplete. It does not explain return values, error states, or parameter details beyond a few scenarios, leaving substantial gaps for the AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description only clarifies spotifyUrl and endDate usage in review/demo scenarios. It does not explain the majority of the 17 parameters (e.g., adCopy, appleMusicUrl, userIntentSummary), leaving significant ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for creating a new Smart Campaign or YouTube Campaign and starting the launch workflow. It specifies the exact verb and resource, and distinguishes between campaign types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use and when-not-to-use guidance. It advises against using for recommendations/previews, gives specific scenarios for omitting spotifyUrl/endDate, and instructs not to chain tools unnecessarily after launch.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_list_artists: List Artists (Read-only)
Use this when the user wants to see which artists or YouTube channels they manage, along with billing status, active campaign count, and their role. Pass artistId when you need the full profile/readiness details for one artist instead of a roster page. Do not use this for campaign details; use dynamoi_list_campaigns or dynamoi_get_campaign. Never use this for generic social-media or marketing advice, including Instagram follower-growth questions, unless the user explicitly asked about their Dynamoi roster. If the result is empty, the user is brand-new — do not stop with 'no records found'; instead route via dynamoi_get_account_overview.recommendedNextActions or read dynamoi://playbooks/onboarding-tree.
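The empty-roster fallback the description prescribes can be sketched as a small routing helper. The `artists` response field and the return labels are assumptions, not a documented response shape:

```python
# Sketch of the empty-roster routing rule from the description above.
def next_step_for_roster(result):
    """Route brand-new users onward instead of stopping at 'no records found'."""
    artists = result.get("artists", [])  # assumed response field
    if not artists:
        # Brand-new user: consult the account overview's
        # recommendedNextActions (or the onboarding playbook resource).
        return "dynamoi_get_account_overview"
    return "show_roster"

assert next_step_for_roster({"artists": []}) == "dynamoi_get_account_overview"
```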
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| cursor | No | | |
| format | No | | |
| artistId | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations show read-only, non-destructive. Description adds context about returned fields (billing, campaign count, role) beyond annotation safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with core purpose, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While purpose and usage are clear, missing parameter descriptions and output format details leave the agent guessing about pagination and formatting options.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No explanation of any of the three parameters (limit, cursor, format). Schema coverage is 0%, and description provides no guidance on how to use them.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool lists artists/YouTube channels with billing status, campaign count, and role. Distinguishes from campaign-related siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (view artists) and when not to (campaigns, generic advice), with direct sibling references.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_list_available_countries: List Available Countries (Read-only)
Use this when the user asks which countries they can target for a Smart Campaign or YouTube campaign. Always pass campaignType because Smart Campaign and YouTube country catalogs are different. Do not use this for generic country marketing advice.
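The always-pass-campaignType rule can be sketched as a minimal argument builder. The helper name and the assumed `query` semantics are illustrative, not a confirmed API:

```python
# Hypothetical argument builder for dynamoi_list_available_countries.
def build_country_query(campaign_type, query=None):
    """campaignType is always required because the Smart Campaign and
    YouTube country catalogs differ; `query` narrows the list
    (assumed semantics)."""
    args = {"campaignType": campaign_type}
    if query:
        args["query"] = query
    return args
```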
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| cursor | No | | |
| format | No | | |
| campaignType | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds key behavioral context: the country list depends on campaignType, which is not implied by annotations alone. This helps the agent understand that results vary by parameter value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with critical usage guidance, and contains no extraneous information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers core usage and the output schema exists to document return values, it fails to explain optional parameters like pagination (cursor, limit) or output format (format). For a 5-parameter tool, this leaves the agent partially uninformed about full capabilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should compensate but only mentions campaignType (the required parameter). It does not explain limit, query, cursor, or format, leaving the agent to infer their meaning from names and schema constraints. This is a significant gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: listing countries for targeting in Smart Campaign or YouTube campaigns. It uses specific verbs ('list') and resources ('available countries'), and differentiates from generic country queries, leaving no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use ('when the user asks which countries they can target') and when not to use ('Do not use this for generic country marketing advice'). It also provides a crucial usage requirement: 'Always pass campaignType because Smart Campaign and YouTube country catalogs are different.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_list_campaigns: List Campaigns (Read-only)
Use this when the user wants to browse campaigns for one artist, optionally filtered by type or status. Do not use this for a single campaign deep dive; use dynamoi_get_campaign for that. Never use this to personalize generic marketing advice. If the user has no artists yet, do not call this — route via dynamoi_get_account_overview first.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| cursor | No | | |
| format | No | | |
| status | No | | |
| artistId | Yes | | |
| campaignType | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the usage is safe. The description adds little behavioral context beyond 'browse', such as pagination behavior or return format. With good annotations, the burden on description is lower; a 3 is appropriate as it adds minimal additional insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with every sentence earning its place. First sentence states purpose and scope, second provides exclusion and alternative, third adds a behavioral prohibition. No extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 6 parameters (including pagination and format), no output schema, and moderate complexity, the description could be more complete by explaining pagination (limit, cursor) and output format (format parameter). It covers the essential usage but leaves gaps for practical invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions filtering by 'type' and 'status', which maps to two parameters (campaignType, status), but leaves other parameters like limit, cursor, format, and artistId unexplained. This is insufficient for complete parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the primary action (browse campaigns) and the resource (one artist's campaigns), explicitly distinguishes from the sibling tool 'dynamoi_get_campaign' for deep dives, and mentions optional filtering by type or status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use guidance ('when the user wants to browse campaigns') and when-not-to-use ('do not use for a single campaign deep dive' with alternative tool named). It also includes a prohibitive statement ('never use this to personalize generic marketing advice').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_list_media_assets: List Media Assets (Read-only)
Use this when the user wants to choose from uploaded images or videos that can be reused in a campaign launch. Do not use this when the user only wants campaign status or analytics. Use format=json when you need asset IDs for a follow-up launch. Request includeUrls only when the assistant must display or inspect public-safe asset URLs.
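The `format=json` and `includeUrls` guidance above can be sketched as a hypothetical argument builder; the helper name and flag semantics are inferred from the description, not a documented client API:

```python
# Hypothetical argument builder for dynamoi_list_media_assets.
def build_media_asset_args(artist_id, need_ids_for_launch=False,
                           need_display_urls=False):
    """Per the description: request format='json' when asset IDs will feed
    a follow-up launch, and includeUrls only when the assistant must
    display or inspect public-safe asset URLs."""
    args = {"artistId": artist_id}
    if need_ids_for_launch:
        args["format"] = "json"
    if need_display_urls:
        args["includeUrls"] = True
    return args
```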
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| cursor | No | | |
| format | No | | |
| artistId | Yes | | |
| includeUrls | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark readOnlyHint true and destructiveHint false; description adds context on reuse and public-safe URLs, though lacks pagination behavior details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three succinct sentences, front-loaded with primary usage, no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description covers key usage and parameter guidance; slight gap on response structure but sufficient for agent decision.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, description adds value for format and includeUrls but omits guidance for limit, cursor, and artistId semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists media assets for campaign launch use, distinguishing from siblings dealing with campaigns or analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly specifies when to use (user wants to choose media assets) and when not (campaign status/analytics), plus parameter guidance for format and includeUrls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_list_smart_links: List Smart Links (Read-only)
Use this when the user wants to list free Smart Links for one artist, including release title, public URL, publish state, claim state, render state, and theme. Do not use this for paid campaign lists; use dynamoi_list_campaigns for campaigns. In the final answer, show public URLs and avoid internal IDs unless asked. If empty for an artist with connected Spotify, suggest dynamoi_create_smart_links_from_spotify_artist for catalog import or dynamoi_create_smart_link_from_spotify for one release instead of stopping at 'no Smart Links yet'.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| cursor | No | | |
| format | No | | |
| artistId | Yes | | |
| claimStatus | No | | |
| renderState | No | | |
| publishState | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description does not need to reiterate safety. The description adds context about 'free' Smart Links and the fields included, but does not detail pagination or rate limits. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. Essential information is front-loaded. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter list tool with no output schema, the description covers the main purpose, key fields, and provides sibling guidance. Missing pagination details and output format, but sufficient for a competent agent given annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions some parameter-related fields (publish state, claim state, render state) but does not explain limit, query, cursor, format, or artistId. Partial coverage; additional parameter guidance would improve usability.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists free Smart Links for one artist, specifies included fields, and explicitly distinguishes from sibling tool dynamoi_list_campaigns. The verb 'list' and resource 'Smart Links' are precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use ('when the user wants to list free Smart Links for one artist') and when-not-to-use with alternative ('Do not use this for paid campaign lists; use dynamoi_list_campaigns for campaigns').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_search: Search Dynamoi (Read-only)
Use this when the user mentions an artist, release, campaign, or smart link but you do not yet know the exact record to inspect. Do not use this for analytics summaries or billing questions once you already know the target record. If the result is empty for a brand-new user (no artists yet), do not respond 'no records found' as a terminal answer — instead suggest creating their first artist hub via dynamoi_create_smart_links_from_spotify_artist or read dynamoi://playbooks/onboarding-tree.
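The search-when-record-unknown pattern can be sketched as a hypothetical argument builder. The `type` values mirror the resource kinds named in the description (artist, release, campaign, smart link); everything else is an assumption:

```python
# Hypothetical argument builder for dynamoi_search.
def build_search_args(query, record_type=None, include_archived=False):
    """`type` narrows results to one resource kind (assumed enum values:
    artist, release, campaign, smart_link); includeArchived widens scope."""
    args = {"query": query}
    if record_type:
        args["type"] = record_type
    if include_archived:
        args["includeArchived"] = True
    return args
```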
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | | |
| limit | No | | |
| query | No | | |
| cursor | No | | |
| format | No | | |
| artistId | No | | |
| includeArchived | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the agent knows the call is safe. The description adds context about search scope but does not elaborate on pagination or side effects, which is acceptable given the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and exclusion criteria. No wasted words; efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 7 parameters and no output schema. Description does not explain what the return value looks like (e.g., list of matches, summary format). Given complexity, a brief mention of output structure would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 7 parameters with 0% description coverage. Description only mentions the 'type' enum implicitly (artist, release, campaign, smart link), but provides no detail on query, limit, cursor, format, artistId, or includeArchived.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a search tool for when the user mentions an artist, release, campaign, or smart link but the exact record is unknown. It contrasts with sibling tools that require a known record (e.g., getters).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use and when not to use (not for analytics or billing once record is known), providing clear context for selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_start_meta_connection: Start Meta Connection (Idempotent)
Use this when the user is ready to connect Meta for Spotify Smart Campaigns from chat. This returns a signed Meta OAuth URL and may send the user through a Page/Instagram selection step before the chat-first return page. After the user returns, poll dynamoi_get_platform_status with the returned onboardingAttemptId and onboardingFlow=meta until platforms.meta.status is oauth_complete, partnership_pending, or partnership_active.
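The polling step the description mandates can be sketched as a minimal loop. The `get_platform_status` callable stands in for a hypothetical client wrapper around dynamoi_get_platform_status, and the nested `platforms.meta.status` response shape is inferred from the description:

```python
import time

# Terminal states named in the description above.
TERMINAL = {"oauth_complete", "partnership_pending", "partnership_active"}

def poll_meta_status(get_platform_status, attempt_id, interval=5, max_polls=60):
    """Poll dynamoi_get_platform_status (injected as a callable) with the
    returned onboardingAttemptId and onboardingFlow='meta' until the Meta
    connection reaches a terminal state, or give up after max_polls."""
    for _ in range(max_polls):
        result = get_platform_status(
            onboardingAttemptId=attempt_id, onboardingFlow="meta"
        )
        status = result["platforms"]["meta"]["status"]  # assumed shape
        if status in TERMINAL:
            return status
        time.sleep(interval)
    return None  # never reached a terminal state
```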
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | | |
| artistId | Yes | | |
| userIntentSummary | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key behaviors beyond annotations: returns a signed OAuth URL, may go through a Page/Instagram selection step, and requires subsequent polling. Consistent with readOnlyHint=false and openWorldHint=true. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that front-load the purpose and include necessary polling details. Slightly verbose but still efficient for the complexity. Could be more concise, but it's well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the main flow and next steps, and output schema exists (so return values are handled). However, lacks explanation of error scenarios or what the immediate return contains beyond the OAuth URL. Adequate for an OAuth initiation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description gives no meaning to the parameters (artistId, format, userIntentSummary). It mentions polling parameters (onboardingAttemptId, onboardingFlow) but not input parameters. The description fails to compensate for missing schema docs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action: 'Use this when the user is ready to connect Meta for Spotify Smart Campaigns from chat.' It explains the core function (returns a signed Meta OAuth URL) and distinguishes it from sibling tools that handle other aspects of campaigns or links.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when to use (user ready to connect) and detailed post-use instructions: poll dynamoi_get_platform_status with specific parameters until a certain status. Does not explicitly exclude alternatives, but the polling guidance is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_start_subscription_checkout: Start Subscription Checkout (Idempotent)
Use this when the user is ready to activate Dynamoi managed advertising billing for one artist. This creates or reuses a secure Stripe Checkout URL that the user can open from chat. Checkout returns to a Dynamoi page that tells the user to come back to the AI assistant; after that, poll dynamoi_get_billing with the returned onboardingAttemptId to confirm billing became active. Do not use this for billing status checks; use dynamoi_get_billing.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | | |
| artistId | Yes | | |
| userIntentSummary | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate mutation (readOnlyHint=false), and the description goes beyond them by explaining the checkout URL creation and return flow. It could mention authorization or failure modes, but it still adds useful context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, all essential, front-loaded with the core purpose and user action. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Describes the workflow and links to dynamoi_get_billing, but does not cover output schema details or edge cases such as an already-active subscription.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain any of the three parameters (format, artistId, userIntentSummary), leaving the agent to infer their purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it is for activating Dynamoi managed advertising billing for one artist, with explicit differentiation from billing status checks by naming dynamoi_get_billing as alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (user ready to activate billing), describes the checkout flow and polling requirement, and tells when not to use (billing status checks).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_start_youtube_channel_link (Start YouTube Channel Link; grade A; idempotent)
Use this when the user is ready to link a YouTube channel to one Dynamoi artist from chat. This returns a Google OAuth URL bound to the signed-in user and artist. Google returns to a Dynamoi page that tells the user to come back to the AI assistant; after that, poll dynamoi_get_platform_status with the returned onboardingAttemptId and onboardingFlow=youtube until platforms.youtube.connected is true.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | | |
| artistId | Yes | | |
| userIntentSummary | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
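The polling contract from the description (poll dynamoi_get_platform_status with the returned onboardingAttemptId and onboardingFlow=youtube until platforms.youtube.connected is true) can be sketched as follows. The stub simulates server responses; the parameter names and the `platforms.youtube.connected` path come from the tool description, everything else is invented for the sketch.

```python
# Stub for dynamoi_get_platform_status: simulates two polls before the
# YouTube connection completes.
_POLLS = iter([False, False, True])

def get_platform_status(onboarding_attempt_id: str, onboarding_flow: str) -> dict:
    return {"platforms": {"youtube": {"connected": next(_POLLS)}}}

def wait_for_youtube_link(attempt_id: str, max_polls: int = 10) -> bool:
    # Poll until platforms.youtube.connected flips to true, or give up.
    for _ in range(max_polls):
        status = get_platform_status(attempt_id, onboarding_flow="youtube")
        if status["platforms"]["youtube"]["connected"]:
            return True
    return False
```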
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes the OAuth flow and polling behavior. Annotations already indicate idempotentHint=true and no destructiveness, so the description adds useful process context without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently cover purpose, usage, and follow-up. Could be slightly more structured (e.g., a bullet for polling), but overall clear and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Explains the overall OAuth flow and polling, which is good for a complex tool. However, missing parameter details limit completeness, though output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description does not explain any parameters. The required artistId is implied but not detailed, and optional parameters like format and userIntentSummary are ignored.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a Google OAuth URL for linking a YouTube channel to a Dynamoi artist. It specifies the context and differentiates from siblings like dynamoi_start_meta_connection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use ('when the user is ready to link a YouTube channel') and provides follow-up steps (polling dynamoi_get_platform_status). No misuse conditions needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_update_campaign (Update Campaign; grade A; destructive; idempotent)
Use this when the user explicitly wants to pause, resume, or update the budget/end date for an existing campaign. Set action to pause, resume, or update_budget. Do not use this for inspection-only questions; this changes live campaign workflow state or external campaign settings.
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | | |
| endDate | No | | |
| campaignId | Yes | | |
| budgetAmount | No | | |
| clientRequestId | No | | |
| userIntentSummary | No | | |
| expectedCurrentStatus | No | | |
| expectedCurrentEndDate | No | | |
| expectedCurrentBudgetAmount | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
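Because the schema carries no parameter descriptions, an agent has to infer which fields pair with which action. A hedged sketch of argument assembly, assuming clientRequestId is an idempotency key and the expectedCurrent* fields are optimistic-concurrency guards (neither is documented); the action values come from the description:

```python
import uuid

# Valid values for the 'action' parameter, per the tool description.
VALID_ACTIONS = {"pause", "resume", "update_budget"}

def build_update_campaign_args(campaign_id: str, action: str,
                               budget_amount=None, end_date=None,
                               expected_current_status=None) -> dict:
    if action not in VALID_ACTIONS:
        raise ValueError(f"action must be one of {sorted(VALID_ACTIONS)}")
    args = {
        "campaignId": campaign_id,
        "action": action,
        # Assumed: clientRequestId deduplicates retried calls.
        "clientRequestId": str(uuid.uuid4()),
    }
    if action == "update_budget":
        if budget_amount is None and end_date is None:
            raise ValueError("update_budget needs budgetAmount and/or endDate")
        if budget_amount is not None:
            args["budgetAmount"] = budget_amount
        if end_date is not None:
            args["endDate"] = end_date
    if expected_current_status is not None:
        # Assumed: the call fails if the live status no longer matches.
        args["expectedCurrentStatus"] = expected_current_status
    return args
```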
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description reinforces the destructiveHint annotation by stating it changes live workflow state. While no rate limits or auth details are added, the core behavioral impact is clearly conveyed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences with no extraneous information. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters and an output schema, the description is too brief. It lacks details on conditional parameter requirements and expected responses, though the annotations partly compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Only the 'action' parameter is explained; other important parameters like endDate, budgetAmount, and optional fields are not described. The description adds minimal value beyond the enum.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: pausing, resuming, or updating budget/end date for existing campaigns. It distinguishes it from inspection tools like get_campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use (explicit user request for pause/resume/update) and when not to use (inspection-only questions), providing clear decision criteria for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dynamoi_update_smart_link (Update Smart Link; grade B; idempotent)
Use this when the user wants to change one Smart Link's public description, publish/unpublish the public landing page, or update artist-level Smart Link theme/pixel settings. Set action to update_description, publish, unpublish, or update_artist_settings. This updates public landing-page behavior and may queue background rendering.
| Name | Required | Description | Default |
|---|---|---|---|
| theme | No | | |
| action | Yes | | |
| artistId | No | | |
| playLinkId | No | | |
| metaPixelId | No | | |
| tiktokPixelId | No | | |
| clientRequestId | No | | |
| customDescription | No | | |
| expectedUpdatedAt | No | | |
| userIntentSummary | No | | |
| expectedPublishState | No | | |
| googleAdsConversionId | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
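For dynamoi_update_smart_link, a pre-flight check can catch missing fields before a live call. Which parameters each action requires is an assumption inferred from the parameter names, not documented behavior:

```python
# Assumed mapping from each 'action' value to the parameters it needs.
# The action values come from the description; the requirements are guesses.
ACTION_REQUIRED = {
    "update_description": ("playLinkId", "customDescription"),
    "publish": ("playLinkId",),
    "unpublish": ("playLinkId",),
    "update_artist_settings": ("artistId",),
}

def missing_smart_link_args(args: dict) -> list:
    """Return the argument names the requested action still needs."""
    action = args.get("action")
    if action not in ACTION_REQUIRED:
        raise ValueError(f"unknown action: {action!r}")
    return [key for key in ACTION_REQUIRED[action] if key not in args]
```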
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description goes beyond annotations by noting that it updates a public landing page and queues background rendering. This adds valuable behavioral context not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences are concise and front-loaded with the main purpose. No unnecessary words, but could be slightly more streamlined without losing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description does not explain return values beyond the minimal output schema (status only) and does not cover most parameters. It is adequate for the primary action but incomplete for a tool with 12 parameters and no schema descriptions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description only clarifies the 'action' values, loosely implying 'customDescription' and the theme/pixel fields. Parameters such as playLinkId, clientRequestId, expectedUpdatedAt, and userIntentSummary are not explained, leaving a significant gap for the agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (update) and the specific resources (a Smart Link's public description, publish state, and artist-level settings). It enumerates the valid actions, but could be more specific about exactly which fields each action changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates when to use this tool (description changes, publish/unpublish, artist-level theme/pixel settings). However, it does not provide explicit when-not-to-use guidance or mention prerequisites such as the Smart Link already existing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch (Fetch, OpenAI Connectors; grade A; read-only)
OpenAI ChatGPT Deep Research / Connectors fetch contract. Given an id returned by search (formatted as 'artist:<uuid>', 'campaign:<uuid>', or 'smartlink:<uuid>'), returns the full record for citation.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
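The id contract ('artist:<uuid>', 'campaign:<uuid>', 'smartlink:<uuid>') is easy to validate before calling fetch. A small helper, assuming a single colon separates the record type from the identifier (consistent with the examples in the description):

```python
# The three id prefixes come from the fetch description.
VALID_PREFIXES = ("artist", "campaign", "smartlink")

def parse_fetch_id(record_id: str):
    """Split an id like 'campaign:123e4567-...' into (type, identifier)."""
    prefix, sep, rest = record_id.partition(":")
    if not sep or prefix not in VALID_PREFIXES or not rest:
        raise ValueError(
            "id must look like 'artist:<uuid>', 'campaign:<uuid>', "
            f"or 'smartlink:<uuid>', got {record_id!r}")
    return prefix, rest
```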
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds behavioral context beyond annotations by detailing the ID format and that it returns the full record. Annotations already declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description does not contradict annotations and provides useful information about input constraints. It doesn't mention error handling or output specifics, but given the presence of an output schema, this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the tool's purpose and key constraints. Every sentence adds value: the first identifies the type and origin of the ID, the second states the action and output. There is no redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, read-only operation, output schema available), the description covers all essential aspects: what the tool does, how to use it (id from search), the ID format, and the return value. The presence of an output schema eliminates the need to describe the return structure. The description is sufficient for an agent to correctly invoke the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has a single parameter `id` with no description (0% coverage). The description adds significant meaning by specifying that the id must be returned by `search` and formatted as 'artist:<uuid>', 'campaign:<uuid>', or 'smartlink:<uuid>'. This goes beyond the schema's type constraints (string, length limits) and tells the agent where to get the value and what formats are valid.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('returns the full record') and resource ('record identified by an id from search'). It specifies the ID format with three examples, distinguishing it from sibling get tools that are type-specific. The verb 'fetch' is appropriate for retrieving data, and the description aligns with the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to use this tool when you have an id returned by `search`, which provides clear context. It doesn't explicitly state when not to use it, but the implied workflow (search then fetch) and the existence of sibling get tools (e.g., dynamoi_get_artist) suggest alternatives. Slightly more explicit guidance on when to prefer fetch over those get tools would improve clarity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Search, OpenAI Connectors; grade A; read-only)
OpenAI ChatGPT Deep Research / Connectors search contract. Returns matching Dynamoi artists, campaigns, and Smart Links so they can be cited in a deep-research session. For regular ChatGPT chat use dynamoi_search instead.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description adds value by specifying the session context (deep-research vs. regular chat). It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences clearly convey purpose and usage differentiation with zero waste. Information is front-loaded and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, output schema exists), the description fully captures when and how to use it. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, the description should add meaning to the query parameter. It only implies that query is a search term but lacks format guidance, examples, or constraints beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns matching Dynamoi artists, campaigns, and Smart Links for deep-research sessions, and explicitly distinguishes itself from the sibling dynamoi_search tool for regular chat use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly specifies when to use this tool (deep-research sessions) and when not (regular ChatGPT chat), naming the alternative dynamoi_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
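A sketch of the verification this implies: parse the manifest and check that some maintainer email matches the account email. The exact checks Glama runs are not documented; this mirrors only the stated rule.

```python
import json

def manifest_claims_email(raw: str, account_email: str) -> bool:
    """Check a /.well-known/glama.json payload: at least one maintainer
    email must match the Glama account email, per the claim instructions."""
    doc = json.loads(raw)
    return any(m.get("email") == account_email
               for m in doc.get("maintainers", []))
```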
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.