Youfiliate Smart Links
Server Details

Create geo-targeted affiliate smart links, pull analytics, and rewrite YouTube descriptions.

- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: andrewmpierce/youfiliate-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 18 of 18 tools scored. Lowest: 3.5/5.
Every tool targets a distinct resource or action: smart link CRUD and health, YouTube connection and migration, preferences, and analytics. Even similar stats tools are clearly separated by aggregate vs. per-link scope.
All tools follow the consistent 'youfiliate_verb_noun' pattern in snake_case. Verbs like create, get, update, delete, list, check, connect, disconnect, preview, start, rollback are used predictably.
18 tools cover smart link management, YouTube integration, and analytics without being excessive. Each tool serves a specific purpose, and the count is appropriate for the domain's complexity.
The tool surface provides full CRUD for smart links, health checks, stats (individual and aggregate), preferences, and a complete YouTube migration lifecycle (connect, preview, start, status, list, rollback). No obvious gaps.
Available Tools
18 tools

youfiliate_check_link_health (Read-only, Idempotent)
Trigger a health check for a specific smart link.
Checks the default URL and all geo-rule URLs for availability.
Returns the health status (healthy/broken/unknown).
Rate limited to once per 5 minutes per link.
Does NOT modify the link configuration.
Common errors:
- Rate limit: wait 5 minutes between health checks for the same link.
- Smart link not found: check the ID.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
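The "once per 5 minutes per link" rate limit described above can also be enforced client-side before the call is ever sent. The sketch below is an assumption-level guard, not part of the server's API: only the tool name and the 5-minute cooldown come from this page, and the transport call itself is stubbed out.

```python
# Client-side guard for the "once per 5 minutes per link" rate limit on
# youfiliate_check_link_health. The cooldown value comes from the tool
# description; everything else here is an illustrative sketch.
import time

HEALTH_CHECK_COOLDOWN = 5 * 60  # seconds, per the tool description

class HealthCheckGate:
    """Tracks the last health check per link ID and blocks early re-checks."""

    def __init__(self):
        self._last_checked = {}  # link ID -> timestamp of last check

    def can_check(self, link_id, now=None):
        now = time.monotonic() if now is None else now
        last = self._last_checked.get(link_id)
        return last is None or (now - last) >= HEALTH_CHECK_COOLDOWN

    def record_check(self, link_id, now=None):
        self._last_checked[link_id] = time.monotonic() if now is None else now

gate = HealthCheckGate()
link = "00000000-0000-0000-0000-000000000000"  # placeholder UUID
if gate.can_check(link, now=0.0):
    # A real client would send a tools/call request for
    # youfiliate_check_link_health with {"id": link} here.
    gate.record_check(link, now=0.0)

print(gate.can_check(link, now=100.0))  # False: still inside the cooldown
print(gate.can_check(link, now=301.0))  # True: cooldown elapsed
```

Catching the cooldown locally avoids burning a call just to receive the rate-limit error the description warns about.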
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, destructiveHint. The description adds important behavioral details: it does not modify config, checks default and geo-rule URLs, returns health status, and has rate limits. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured: purpose sentence, then bullet-like details, then rate limit, non-modification, and common errors. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema and presence of an output schema, the description covers all necessary context: purpose, what is checked, constraints, errors, and side-effect-free guarantee.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema already includes descriptions for 'id' (UUID pattern) and 'response_format'. The description adds meaning beyond schema by explaining what is checked (default URL and geo-rule URLs) and overall behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The title and description clearly state 'Trigger a health check for a specific smart link', with specific verb and resource. It distinguishes from siblings by focusing on health checking, while others handle creation, deletion, stats, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides rate limit info (once per 5 min) and common errors, guiding appropriate use. However, it does not explicitly compare to alternatives or state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_connect_youtube
Initiate YouTube OAuth connection. Returns a URL the user must open in their browser.
The user must open the returned URL in their web browser to authorize
Youfiliate to access their YouTube channel. The OAuth callback is handled
in the browser — this tool only returns the authorization URL.
Does NOT read or modify any YouTube data. The OAuth flow is completed
in the user's browser.
Common errors:
- Already connected: disconnect first with `youfiliate_disconnect_youtube`.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
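For clients built by hand, the call above travels as a standard MCP `tools/call` request. The sketch below assembles that envelope; the tool name and the `params` wrapper come from this page, while the JSON-RPC shape follows the MCP convention and should be checked against the spec rather than taken as this server's documented wire format.

```python
# Sketch of the JSON-RPC 2.0 "tools/call" envelope an MCP client would send
# for youfiliate_connect_youtube. The response carries an authorization URL
# the user must open in a browser; the OAuth callback completes there, not
# in this tool.
import json

def tool_call(name, arguments, request_id=1):
    """Build an MCP tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

request = tool_call("youfiliate_connect_youtube", {"params": {}})
print(json.dumps(request, indent=2))
```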
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral context beyond annotations. It explains that the OAuth callback happens in the user's browser, the tool only returns the URL, and it explicitly states that it does not read or modify YouTube data. This complements the annotations (readOnlyHint=false, destructiveHint=false) by clarifying the tool's side effects on the server state vs. YouTube data. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet comprehensive, using multiple paragraphs to structure information. The first sentence states the purpose, followed by user instructions, a clarification about data modification, and common errors. Every sentence adds value without redundancy. The length is appropriate for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and the presence of an output schema and annotations, the description is sufficiently complete. It covers the OAuth process, user actions needed, error scenarios, and the tool's limitations. The only minor gap is the lack of parameter documentation, but the parameter is optional and self-explanatory from its schema. Overall, it provides a full picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has a single parameter (response_format) with a default value and enum, but the description does not mention or explain this parameter. With schema description coverage at 0%, the description should compensate, but it fails to add any meaning beyond what the schema already provides. The parameter is simple, but the omission reduces usefulness for an agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it initiates YouTube OAuth connection and returns a URL. It uses specific verbs ('Initiate', 'returns a URL') and specifies the resource ('YouTube OAuth connection'). It distinguishes itself from sibling tools by explicitly noting that it does not read or modify YouTube data, which differentiates it from tools like youfiliate_disconnect_youtube or youfiliate_get_youtube_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on when to use the tool (to initiate OAuth) and includes a common error scenario ('Already connected: disconnect first with youfiliate_disconnect_youtube'). However, it does not explicitly state when not to use it or mention alternatives like checking status with youfiliate_get_youtube_status before connecting. The guidance is clear but lacks explicit exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_create_smart_link
Create a new geo-targeted smart link with an optional custom slug.
Creates a smart link that redirects visitors to the destination URL.
Optionally configure country-specific geo rules and deep linking for
iOS/Android apps. Does NOT modify any existing links.
Returns the created smart link details including its short URL
(youfil.to/<slug>).
Common errors:
- Slug already taken: choose a different slug or omit for auto-generation.
- Plan limit reached: upgrade your plan to create more links.
- Invalid URL: ensure the destination URL is a valid HTTP/HTTPS URL.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
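The "Invalid URL" error above can be caught before the call by validating the destination locally. In the sketch below, only the tool name and the http/https constraint come from this page; the field names (`destination_url`, `slug`, `geo_rules`) are illustrative guesses at the schema, not confirmed parameter names.

```python
# Pre-validate a destination before calling youfiliate_create_smart_link,
# mirroring the "Invalid URL: ensure the destination URL is a valid
# HTTP/HTTPS URL" common error. Argument field names are assumptions.
from urllib.parse import urlparse

def is_valid_destination(url):
    """Accept only absolute http/https URLs, per the common-errors note."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

arguments = {
    "params": {
        "destination_url": "https://example.com/product",  # default target
        "slug": "my-product",          # optional; omit for auto-generation
        "geo_rules": [                 # optional country-specific targets
            {"country": "DE", "url": "https://example.de/produkt"},
        ],
    }
}

assert is_valid_destination(arguments["params"]["destination_url"])
assert not is_valid_destination("ftp://example.com/file")  # wrong scheme
```

The slug and plan-limit errors cannot be checked locally, but the URL check removes the one failure mode an agent can rule out for free.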
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds context beyond annotations by stating it does not modify existing links and returns created link details. Annotations already present, but description enhances understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with clear front-loading: purpose first, then explanation, return info, and common errors. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers creation process, return value, and common errors. However, it does not explain deep link configuration details or return-value fields, though an output schema exists to cover the latter. Adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring description to compensate. Description only hints at geo rules and deep linking parameters without detailing them, leaving parameter semantics largely to the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it creates a geo-targeted smart link with optional custom slug, and distinguishes from siblings by explicitly noting it does NOT modify existing links.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes when to use (creating a new link) and lists common errors, but does not explicitly state alternatives or when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_delete_smart_link (Destructive, Idempotent)
Delete a smart link permanently. The short URL will stop working.
IMPORTANT: Always confirm with the user before executing this action.
The `confirm` parameter must be set to true. This is a destructive
action that cannot be undone — the slug becomes available for reuse
after a cooldown period.
Does NOT affect other links or YouTube descriptions.
Common errors:
- Smart link not found: check the ID.
- confirm=False: you must set confirm=True after getting user confirmation.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
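The confirm-before-delete contract above lends itself to a small client-side gate: never build the call unless the user has explicitly confirmed, and always pass `confirm=True`. The tool name and `confirm`/`id` parameters come from this page; the guard itself is a sketch, not part of the server's API.

```python
# Confirm-gate for youfiliate_delete_smart_link: deletion is permanent and
# cannot be undone, so the arguments are only assembled after explicit
# user confirmation.
def build_delete_arguments(link_id, user_confirmed):
    """Return delete arguments only after explicit user confirmation."""
    if not user_confirmed:
        raise PermissionError(
            "Deletion is permanent; ask the user before calling "
            "youfiliate_delete_smart_link."
        )
    return {"params": {"id": link_id, "confirm": True}}

args = build_delete_arguments("00000000-0000-0000-0000-000000000000", True)
print(args["params"]["confirm"])  # True

try:
    build_delete_arguments("00000000-0000-0000-0000-000000000000", False)
except PermissionError as exc:
    print("blocked:", exc)
```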
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds concrete behavioral details beyond annotations: permanent deletion, short URL stops, slug reuse cooldown, no effect on other links. No contradiction with annotations (destructiveHint=true, idempotentHint=true).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Succinct, well-organized: single-line purpose, then critical warnings, exclusions, and error cases. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers outcome, side effects, and common errors. The output schema likely documents return values. Could mention prerequisites (e.g., access to the link), but the ID requirement is in the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes both parameters (id, confirm) with clear descriptions. The tool description reinforces the confirm parameter's role and adds context about slug reuse and common errors, adding value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Delete permanently) and the resource (smart link). It uniquely identifies this tool among siblings like update_smart_link and create_smart_link.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly warns to confirm with user before deletion and that confirm parameter must be true. Provides common errors but does not explicitly contrast with alternative tools (e.g., update) for non-destructive changes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_disconnect_youtube (Destructive, Idempotent)
Disconnect your YouTube account from Youfiliate.
IMPORTANT: Always confirm with the user before executing this action.
The `confirm` parameter must be set to true. This removes stored OAuth
tokens. You will need to reconnect to use the auto-migration feature.
Does NOT modify any YouTube data or video descriptions.
Common errors:
- Not connected: no YouTube account to disconnect.
- confirm=False: you must set confirm=True after getting user confirmation.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds context beyond annotations: explains removal of OAuth tokens, idempotent effect, and that it does not modify YouTube data. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with paragraphs and bullet points. Every sentence provides essential information. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple tool with one parameter, the annotations cover the destructive and idempotent nature, and the description covers all needed context: confirmation, side effects, and errors. An output schema exists, so return values are covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes confirm parameter well. Description adds value by explaining token removal and auto-migration consequence. Schema coverage is high, so baseline is 3; extra context raises to 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Disconnect your YouTube account from Youfiliate' with a clear verb and resource. It distinguishes itself from sibling tools like connect_youtube and get_youtube_status by its specific action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit requirement to confirm with user and set confirm=true. Lists common errors, including when not connected. Could mention alternatives like reconnecting, but clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_get_aggregate_stats (Read-only, Idempotent)
Get aggregate click analytics across all your smart links.
Returns total clicks, top countries, devices, and referrers across your entire account for the specified period. Does NOT modify any data.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark read-only, idempotent, non-destructive. Description adds explicit statement 'Does NOT modify any data', reinforcing behavioral traits without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise, bulleted list, front-loaded with main purpose. No extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, behavior, and key parameters. With output schema present, return values are sufficiently hinted. No gaps for this simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description mentions 'specified period' vaguely but does not detail parameters beyond schema. Schema descriptions exist for period and response_format, but description adds no extra semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it gets aggregate click analytics across all smart links, listing specific metrics. Distinct from sibling tools like get_smart_link_stats (per-link) and list_smart_links.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for account-wide stats over a period, and explicitly says it does not modify data. Does not explicitly contrast with alternatives, but context hints at differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_get_migration_status (Read-only, Idempotent)
Get the status and progress of a specific migration.
Returns detailed status including videos processed, links created,
and any errors. Does NOT modify any data.
Common errors:
- Migration not found: check the ID or use `youfiliate_list_migrations`.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
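Since this tool reports progress on a long-running migration, agents typically poll it. The sketch below is an assumption-level polling loop: the tool name comes from this page, but the actual call is stubbed out and the status values ("running", "completed", "failed") are illustrative guesses at the response fields.

```python
# Polling sketch for a migration tracked via youfiliate_get_migration_status.
# fetch_status stands in for a real tools/call round-trip; the terminal
# status names below are assumptions, not documented values.
def poll_until_done(fetch_status, max_polls=10):
    """Call fetch_status() until a terminal state or the poll budget runs out."""
    for _ in range(max_polls):
        status = fetch_status()
        if status["status"] in ("completed", "failed"):
            return status
    raise TimeoutError("migration still running after max_polls checks")

# Stubbed responses standing in for real tool results:
responses = iter([
    {"status": "running"},
    {"status": "completed", "videos_processed": 12},
])
result = poll_until_done(lambda: next(responses))
print(result["status"])  # completed
```

In a real client the loop would also sleep between polls; that is omitted here to keep the sketch deterministic.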
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explicitly states 'Does NOT modify any data,' which aligns with annotations (readOnlyHint=true, destructiveHint=false). It also details what the return includes (videos processed, links created, errors), adding value beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded: the first sentence states the purpose, followed by relevant details and common errors. Every sentence is valuable and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter (one UUID), presence of output schema, and annotations covering safety, the description provides sufficient context. It mentions the key return fields and error handling, making it complete for an agent to use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one required parameter 'id' with a UUID pattern description. The tool description does not elaborate on the parameter beyond what the schema provides. Since schema description coverage is 0% (the description does not mention parameter semantics), and the parameter is straightforward, a score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the status and progress of a specific migration.' The verb 'Get' and resource 'migration status' are specific, and it distinguishes from sibling tools like 'youfiliate_list_migrations' or 'youfiliate_start_migration'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides guidance on when to use this tool (to get detailed status of a specific migration) and includes common error handling ('Migration not found: check the ID or use youfiliate_list_migrations'). It falls short of explicitly stating when not to use it, but it is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_get_preferences (Read-only, Idempotent)
Get your current smart link preferences/defaults.
Returns default settings applied to newly created smart links. Does NOT create or modify any data.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint; the description adds that it returns default settings and does not modify data, consistent with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences, each adds value, front-loaded, and avoids unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema and annotations, the description covers purpose and behavior adequately. The optional parameter is minor; overall it's sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not mention the response_format parameter. Schema description coverage is 0%, and the description does not compensate for the parameter's meaning, though the parameter is simple.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets smart link preferences/defaults, with a specific verb and resource. It distinguishes from sibling tools like youfiliate_update_preferences.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use by stating it returns default settings for new smart links and explicitly says it does not modify data, providing clear guidance for safe use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_get_smart_link (Read-only, Idempotent)
Get full details of a single smart link by ID.
Returns all configuration including geo rules, deep link config,
and click stats. Does NOT modify the link.
Common errors:
- Smart link not found: check the ID or use `youfiliate_list_smart_links`.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description explicitly states no modification, reinforcing annotations. Lists returned data types (geo rules, deep link config, click stats). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences plus bullet list, front-loaded with main action, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given output schema exists and annotations cover safety, description covers essential aspects: action, returns, error scenarios. Complete for a simple getter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description does not reference any parameters, but schema provides clear descriptions for both parameters. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it gets full details of a single smart link by ID. Distinguishes from siblings by noting it does not modify and mentions list_smart_links as alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides error handling guidance (check ID or use list) but does not compare with other resource-specific tools like get_smart_link_stats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_get_smart_link_stats (Read-only, Idempotent)
Get click analytics for a specific smart link.
Returns click counts broken down by country, device, referrer,
and day for the specified period. Does NOT modify any data.
Common errors:
- Smart link not found: check the ID.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
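The per-link vs. aggregate split noted earlier (this tool for one link, `youfiliate_get_aggregate_stats` for the whole account) can be captured as a routing rule. In the sketch below, the two tool names come from this page, while the argument shape and the `"30d"` period default are assumptions for illustration.

```python
# Route between the per-link and aggregate stats tools based on scope.
# Tool names are from the server's tool list; the "period" default is an
# assumed value, not a documented enum member.
def pick_stats_tool(link_id=None, period="30d"):
    """Choose the per-link or aggregate stats tool based on scope."""
    if link_id:
        return ("youfiliate_get_smart_link_stats",
                {"params": {"id": link_id, "period": period}})
    return ("youfiliate_get_aggregate_stats", {"params": {"period": period}})

name, args = pick_stats_tool("00000000-0000-0000-0000-000000000000")
print(name)                   # youfiliate_get_smart_link_stats
print(pick_stats_tool()[0])   # youfiliate_get_aggregate_stats
```

Encoding the choice this way addresses the usage-guidance gap flagged below: an agent never has to guess which of the two stats tools applies.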
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, destructiveHint. The description adds that it does not modify data and lists common errors, which is useful but does not significantly extend beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three short paragraphs front-loading the purpose. The common errors section is helpful but could be integrated into a single sentence. No unnecessary repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Although the tool has an output schema, the description also covers return values (click counts by dimension) and common errors. With a single required parameter and optional period/format, the coverage is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides descriptions for all parameters (id UUID, period enum, response_format). The tool description does not add additional parameter-level information, so it meets the baseline but adds no extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get click analytics for a specific smart link' and details the breakdown by country, device, referrer, and day. It distinguishes itself from siblings like get_smart_link which retrieves link details, not analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It only lists common errors but does not mention when to choose this over other tools like get_aggregate_stats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_get_youtube_status (Read-only, Idempotent)
Check if your YouTube account is connected.
Returns connection status, channel name, and scope information.
Does NOT modify any data or initiate any connections.
Common errors:
- Not connected: use `youfiliate_connect_youtube` to connect.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, and non-destructive. The description adds that it does not initiate connections and lists return fields, but does not disclose other behavioral details like rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three clear, front-loaded sentences covering purpose, behavior, and error handling with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (as indicated by context), the description adequately covers all necessary aspects: purpose, return data, non-modification, common error, and alternative tool. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not mention the 'response_format' parameter, relying solely on the schema. With low schema description coverage (0%), the description should compensate but does not, though the parameter is simple and the schema provides basic info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks YouTube connection status and returns specific data (channel name, scope). It distinguishes from sibling tools like youfiliate_connect_youtube.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says the tool does not modify data and provides an error scenario with a link to a specific alternative tool (youfiliate_connect_youtube) for when not connected.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_list_migrations (Read-only, Idempotent)
List your YouTube description migrations with pagination.
Returns a paginated list of all migrations. Does NOT modify any data.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, idempotentHint, and destructiveHint. The description reinforces that it does not modify data, but adds no further behavioral insight (e.g., rate limits, response size). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the action and resource, and contains no extraneous information. Every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists and annotations provide safety info, the description adequately covers the tool's purpose and behavior. It could mention pagination details but remains sufficient for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% per context, yet the description provides no explanation of parameters (limit, offset, response_format). The description only mentions pagination without detailing how to use pagination parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The tool name 'youfiliate_list_migrations' and description clearly state it lists YouTube description migrations with pagination. It distinguishes itself from sibling tools like 'get_migration_status' or 'start_migration' which handle individual migrations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states the tool lists migrations and does not modify data, implying safe, read-only usage. However, it lacks explicit guidance on when to use this versus alternative tools or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_list_smart_links (Read-only, Idempotent)
List your smart links with optional filtering, search, and pagination.
Returns a paginated list of smart links. Use filters to narrow results.
Does NOT create or modify any links.
Args:
params: Filters include health_status, search (title/URL), ordering,
limit (1-100, default 20), and offset.
Common errors:
- No links found: you may not have created any links yet.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already state readOnlyHint=true and destructiveHint=false. The description adds that it does not create or modify links, which is consistent but not new. It also mentions common errors (no links found), providing some additional behavioral insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: a single sentence for purpose, then a list of parameters and common errors. No wasted words, and the key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations (readOnly, idempotent), output schema presence, and detailed parameter schema, the description is nearly complete. It explains pagination, filters, and errors. Could mention the response_format parameter's effect, but that is in schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides detailed descriptions for each parameter (e.g., limit, offset, search, ordering, health_status). The description lists the filter types and repeats the limit range, adding little beyond the schema. With high schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists smart links with filtering and pagination. The verb 'list' and resource 'smart links' are specific. It distinguishes from sibling tools like get_smart_link (single) and create/delete/update.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use: to list smart links with optional filters. It does not explicitly mention when not to use or compare to alternatives like get_smart_link (single) or other listing tools, but the purpose is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_preview_migration (Read-only, Idempotent)
Preview a YouTube description migration without making changes.
Performs a dry-run analysis showing how many videos and links would be
affected. Does NOT modify any data or YouTube descriptions. Requires
a connected YouTube account.
Common errors:
- YouTube not connected: connect first with `youfiliate_connect_youtube`.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark readOnly and idempotent. Description reinforces non-modification and adds useful error context, providing value beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four concise sentences covering purpose, behavior, and error handling, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all essential aspects: purpose, safety, prerequisites, and common errors. Output schema exists, so return format is already documented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not mention any parameters. With schema description coverage at 0% per context, the description fails to compensate, though the schema itself contains descriptions for each parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a dry-run preview of YouTube description migration without making changes, distinguishing it from migration execution tools like youfiliate_start_migration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly notes that no modification occurs and requires a connected YouTube account. Mentions common error and prerequisite tool (youfiliate_connect_youtube). Does not explicitly exclude cases where it should not be used, but the purpose is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_rollback_migration (Destructive)
Roll back a completed migration, restoring original YouTube descriptions.
IMPORTANT: This modifies YouTube video descriptions. Always confirm with
the user before executing. This reverts all video descriptions to their
pre-migration state.
The rollback runs asynchronously. Requires a connected YouTube account.
Common errors:
- Migration not found or not in a rollback-eligible state.
- YouTube not connected: reconnect first.
- confirm=False: must set confirm=True after user confirmation.
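The confirm gate described above can be mirrored in client code so the destructive payload is never built without explicit user approval. A sketch, assuming the `id` and `confirm` parameter names from the assessments:

```python
# Hypothetical sketch: gate youfiliate_rollback_migration behind user approval.
# Per the description, the tool rejects confirm=False, so refuse to build the
# payload until the user has explicitly approved the rollback.
def rollback_args(migration_id: str, user_confirmed: bool) -> dict:
    if not user_confirmed:
        raise PermissionError(
            "ask the user first: rollback rewrites YouTube video descriptions"
        )
    return {"id": migration_id, "confirm": True}
```

Keeping the gate on the client side means an agent cannot reach the destructive call path without an explicit approval step.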
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true and readOnlyHint=false. The description adds value by revealing that the rollback runs asynchronously and modifies YouTube video descriptions, which is beyond the annotations. There is no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured into a concise header, an important note, and a list of common errors. It is front-loaded with the core purpose. Each sentence adds value, though the paragraph on common errors could be slightly more compact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema (not shown here) and the annotations, the description covers the essential behavioral aspects: destructive action, async execution, user confirmation requirement, and prerequisites. It is complete enough for an agent to correctly invoke the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite the context indicating 0% schema description coverage, the input schema actually includes descriptions for both id and confirm. However, the tool description does not add significant new parameter semantics beyond what the schema provides. It reiterates that confirm must be set to true after user confirmation, but does not explain id or other nuances.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Roll back a completed migration, restoring original YouTube descriptions.' It uses a specific verb ('roll back') and resource ('migration'), and distinguishes it from sibling tools like youfiliate_start_migration and youfiliate_preview_migration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly warns about the destructive nature ('Always confirm with the user before executing'), lists prerequisites ('Requires a connected YouTube account'), mentions asynchronous execution, and provides common errors including the need to set confirm=True. This gives clear when-to-use and when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_start_migration (Destructive)
Start a YouTube description migration to convert links to smart links.
IMPORTANT: This modifies YouTube video descriptions. Always confirm with
the user before executing. Describe the scope (number of videos/links
affected from the preview) and ask for explicit confirmation.
The migration runs asynchronously. Use `youfiliate_get_migration_status`
to track progress.
Requires a connected YouTube account.
Common errors:
- YouTube not connected: connect first.
- Migration already in progress: wait for it to complete.
- confirm=False: must set confirm=True after user confirmation.
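Since the migration runs asynchronously and progress is tracked via `youfiliate_get_migration_status`, a client typically polls until the job settles. A generic polling sketch; `get_status` stands in for a wrapper around the status tool, and the status strings are assumptions:

```python
# Hypothetical sketch: poll migration status until it leaves an in-progress
# state. get_status is any callable returning a status string; the
# "pending"/"in_progress" values are assumptions, not the documented enum.
import time

def wait_for_migration(get_status, poll_seconds=5.0, timeout=600.0,
                       sleep=time.sleep):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status not in ("pending", "in_progress"):
            return status  # e.g. "completed" or "failed"
        sleep(poll_seconds)
    raise TimeoutError("migration did not finish within the timeout")
```

Injecting `sleep` as a parameter keeps the loop testable without real delays; a modest poll interval also avoids hammering the server while the job runs.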
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations mark destructiveHint=true, and the description adds behavioral details: 'This modifies YouTube video descriptions,' asynchronous execution, requirement for user confirmation, and common error scenarios. No contradictions with annotations; the description adds valuable context beyond the structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear initial sentence, important usage note, and bullet points for common errors. It is front-loaded but slightly verbose with repeated emphasis on confirmation. Could be tightened without losing meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive, asynchronous, multiple parameters), the description covers prerequisites, async tracking, error handling, and confirmation requirement. The output schema exists so return values need not be explained. It is complete for an agent to use the tool safely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Context reports 0% schema description coverage, even though the input schema itself has descriptions. The tool description only partially compensates, noting that `confirm` must be set to true after user confirmation, but it does not describe `auto_geo_rules` or `conversion_mode`, leaving gaps in a low-coverage scenario.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Start a YouTube description migration to convert links to smart links.' The verb 'start' and resource 'migration' are specific, and it distinguishes itself from sibling tools like `youfiliate_preview_migration` and `youfiliate_get_migration_status`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs to always confirm with the user before executing, describes when to use `youfiliate_get_migration_status` to track progress, and lists common errors with solutions. It provides clear guidance on prerequisites (connected YouTube account) and when not to use (migration already in progress).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_update_preferences (Idempotent)
Update your smart link preferences/defaults.
Changes apply to newly created links only — existing links are
not affected. Does NOT delete any data.
Common errors:
- Invalid redirect_type: must be '301' or '302'.
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-read-only, non-destructive, and idempotent. The description adds value by stating 'Does NOT delete any data', clarifying the scope to new links, and listing common errors. This enhances transparency beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences, front-loading the purpose, and uses a bullet for common errors. Every sentence is useful and no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity and the presence of an output schema, the description covers the main aspects: purpose, scope, non-destructiveness, and common errors. It could mention idempotency or authentication, but the annotations already cover idempotency, so it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides descriptions for all parameters (e.g., default_redirect_type, default_geo_rules_enabled). The description only adds a note about the redirect_type validation, which is already captured by the schema's enum. Since schema coverage is high, the description's incremental contribution is minimal.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Update your smart link preferences/defaults', specifying the verb 'update' and the resource 'smart link preferences/defaults'. It distinguishes from sibling tools like 'get_preferences' (read-only) and 'create_smart_link' (individual link creation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains that changes apply only to newly created links, not existing ones, which is a key usage guideline. However, it does not explicitly compare to alternatives or state when not to use, so the guidance is clear but not exhaustive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youfiliate_update_smart_link (Idempotent)
Update an existing smart link (partial update — only provided fields change).
You can update the destination URL, slug, title, redirect type,
geo rules, or deep link config. Geo rules are replaced entirely
(not merged). Does NOT delete the link.
Common errors:
- Smart link not found: check the ID.
- Slug already taken: choose a different slug.
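Because geo rules are replaced wholesale rather than merged, a client helper should make the partial-update semantics explicit. A sketch, with field names derived from the description above (the exact schema keys are assumptions):

```python
# Hypothetical sketch: build a partial-update payload for
# youfiliate_update_smart_link. Only fields the caller passes are changed;
# the allowed-field names are assumptions based on the tool description.
def update_payload(link_id: str, **fields) -> dict:
    allowed = {"destination_url", "slug", "title", "redirect_type",
               "geo_rules", "deep_link_config"}
    unknown = set(fields) - allowed
    if unknown:
        raise ValueError(f"unsupported fields: {sorted(unknown)}")
    # Caution: if "geo_rules" is included, it REPLACES the existing rules
    # entirely, so callers must send the complete desired rule list.
    return {"id": link_id, **fields}
```

Rejecting unknown keys up front catches typos before they silently fail, and the geo-rules comment encodes the replace-not-merge behavior the description warns about.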
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate idempotentHint=true and destructiveHint=false, and the description adds important behavioral details: partial update, geo rules are replaced (not merged), and confirms it does NOT delete. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each with a specific purpose: purpose, field list, behavioral note (geo rules replacement), and common errors. Front-loaded with the core action. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity, presence of an output schema, and annotations covering safety and idempotency, the description adds all needed context: partial update, fields, replacement behavior, and error handling. Complete for an update tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description enumerates key parameters (destination URL, slug, title, redirect type, geo rules, deep link config) beyond what the schema provides in property descriptions. This helps an agent understand the scope of updatable fields, despite low top-level schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Update an existing smart link (partial update — only provided fields change)', clearly identifying the verb (update), resource (smart link), and the partial update nature. This distinguishes it from create or delete operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Gives clear context for use: updating fields of an existing smart link. Notes that geo rules are replaced entirely and common errors are listed. However, it doesn't explicitly contrast with sibling tools like create or delete, but the description implies appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.