Reel25 — Video Analytics for TikTok, Instagram & YouTube
Server Details
Video analytics for TikTok, Instagram, and YouTube. Track, analyze, and discover content.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3/5 across 26 of 26 tools scored.
Most tools have distinct purposes, but some overlap exists: 'list_accounts' and 'get_account_videos' could be confused with 'get_account' for account-related queries, and 'track_video' vs 'bulk_track_videos' are similar in function. Descriptions clarify the differences, but an agent might need to carefully choose between them.
Tool names follow a consistent snake_case pattern with clear verb_noun structures throughout, such as 'add_accounts_to_folder', 'analyze_video', and 'get_account_metrics'. There are no deviations in naming conventions, making the set predictable and readable.
At 26 tools, the count is borderline high for a video analytics server and may feel heavy or overwhelming. While the tools cover tracking, analysis, and management, the set approaches the point where consolidation could improve usability.
The tool set provides comprehensive coverage for video analytics across TikTok, Instagram, and YouTube, including account and video tracking, AI analysis, folder management, performance metrics, and workspace operations. There are no obvious gaps; it supports full CRUD-like workflows and lifecycle management for the domain.
Available Tools
26 tools

add_accounts_to_folder (Grade B)
Add accounts to a folder (max 50). Free.
| Name | Required | Description | Default |
|---|---|---|---|
| folder_id | Yes | Folder UUID | |
| account_ids | Yes | Account UUIDs to add | |
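As a sketch of how an agent might respect the documented max-50 batch limit, the snippet below builds one MCP `tools/call` JSON-RPC payload per chunk of account IDs. The UUID and account-ID values are made-up placeholders; the chunking helper is illustrative, not part of the server.

```python
# Sketch: build MCP "tools/call" requests for add_accounts_to_folder,
# chunking the account list to respect the documented max-50 batch limit.
# The folder and account IDs below are placeholder values.

def chunked(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def build_add_accounts_calls(folder_id, account_ids, batch_limit=50):
    """Return one JSON-RPC payload per batch of at most `batch_limit` accounts."""
    return [
        {
            "jsonrpc": "2.0",
            "id": n,
            "method": "tools/call",
            "params": {
                "name": "add_accounts_to_folder",
                "arguments": {"folder_id": folder_id, "account_ids": batch},
            },
        }
        for n, batch in enumerate(chunked(account_ids, batch_limit), start=1)
    ]

calls = build_add_accounts_calls(
    "11111111-1111-1111-1111-111111111111",
    [f"acct-{i}" for i in range(120)],  # 120 accounts -> batches of 50, 50, 20
)
```

Splitting client-side keeps each call inside the advertised limit instead of relying on the server to reject oversized batches.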
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the 'max 50' batch limit and 'Free' cost implication, which are useful behavioral traits. However, it doesn't address critical aspects like whether this operation is idempotent, what permissions are required, how duplicates are handled, or what happens if accounts are already in the folder.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded with the core purpose. Every word earns its place: 'Add accounts to a folder' establishes the action, '(max 50)' provides a critical constraint, and 'Free' adds cost context. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is insufficiently complete. While it mentions batch limits and cost, it lacks information about return values, error conditions, side effects, or how this operation relates to the broader folder/account management system represented by sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents both parameters thoroughly ('Folder UUID' and 'Account UUIDs to add'). The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add accounts to a folder') and specifies the resource ('accounts', 'folder'). It distinguishes from siblings like 'add_videos_to_folder' by specifying accounts instead of videos, but doesn't explicitly differentiate from other account-related tools like 'track_account' or 'remove_account'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the 'max 50' limitation and 'Free' indication, suggesting this is for batch operations within constraints. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'track_account' or 'remove_account', nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_videos_to_folder (Grade B)
Add videos to a folder (max 50). Free.
| Name | Required | Description | Default |
|---|---|---|---|
| folder_id | Yes | Folder UUID | |
| video_ids | Yes | Video UUIDs to add | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'max 50' as a constraint and 'Free' as a cost hint, but it lacks details on permissions needed, whether the operation is reversible, error handling, or rate limits. This is inadequate for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short phrases, front-loading the core action and adding key constraints efficiently. Every word earns its place, with no wasted information, making it highly readable and to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a mutation tool with no annotations and no output schema, the description is insufficient. It lacks details on behavioral traits, error cases, return values, and how it differs from siblings. The 'Free' hint is minimal and doesn't compensate for the gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters ('folder_id' as Folder UUID and 'video_ids' as Video UUIDs). The description adds no additional meaning beyond what the schema provides, such as format details or constraints beyond 'max 50', which is implied but not explicitly tied to parameters. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'add' and the resources 'videos' and 'folder', making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'bulk_track_videos' or 'track_video', which might have overlapping functionality, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'max 50', suggesting a constraint on when to use it, but it doesn't provide explicit guidance on when to choose this tool over alternatives like 'bulk_track_videos' or 'track_video'. No exclusions or clear alternatives are stated, leaving usage context somewhat vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
analyze_video (Grade A)
AI video analysis: hooks, structure, CTA, UGC. Costs 5 credits. Polls internally (15-30s).
| Name | Required | Description | Default |
|---|---|---|---|
| force | No | Force re-analysis (default false) | |
| video_id | Yes | Video UUID | |
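Since analyze_video costs 5 credits while get_analysis is free, an agent might check for an existing analysis before paying for a new one. The sketch below assumes that pattern; `call_tool` is a stub standing in for a real MCP client, and the returned analysis shape is invented for illustration.

```python
# Sketch: prefer the free get_analysis lookup before spending 5 credits on
# analyze_video. `call_tool` is a stub transport; real responses will differ.

ANALYZE_COST_CREDITS = 5  # per the tool description

call_log = []

def call_tool(name, arguments):
    # Stub MCP transport: pretend no cached analysis exists, and record calls.
    call_log.append((name, arguments))
    if name == "get_analysis":
        return None  # no existing analysis for this video
    return {"video_id": arguments["video_id"], "hooks": [], "cta": None}

def get_or_analyze(video_id, force=False):
    """Reuse an existing analysis when possible; otherwise trigger a new one."""
    if not force:
        cached = call_tool("get_analysis", {"video_id": video_id})
        if cached is not None:
            return cached
    # This call costs ANALYZE_COST_CREDITS and may block 15-30s while the
    # server polls internally.
    return call_tool("analyze_video", {"video_id": video_id, "force": force})

result = get_or_analyze("22222222-2222-2222-2222-222222222222")
```

Passing `force=True` would skip the free lookup and re-run the paid analysis, matching the tool's documented `force` parameter.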
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and adds valuable behavioral context: it discloses the cost (5 credits), polling mechanism (15-30s internal polling), and the types of analysis performed (hooks, structure, CTA, UGC). This goes beyond what the input schema provides about parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded with the core purpose. Every element (analysis types, cost, polling behavior) earns its place with zero wasted words, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides good behavioral context but leaves gaps: it doesn't explain the format of analysis results, error conditions, or how the polling mechanism works in practice. The cost and timing information is helpful but incomplete for full agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 for adequate schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'AI video analysis' with specific aspects analyzed (hooks, structure, CTA, UGC), which is a specific verb+resource combination. However, it doesn't distinguish this analysis tool from sibling tools like 'get_analysis' or 'radar_history', which might provide different types of analysis results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning costs (5 credits) and polling behavior (15-30s), suggesting this is for detailed analysis rather than quick status checks. However, it doesn't explicitly state when to use this versus alternatives like 'get_analysis' or provide clear exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bulk_track_videos (Grade C)
Track up to 25 videos at once by URL. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| urls | Yes | Array of video URLs (max 25) | |
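The review notes that the description never says when to prefer this tool over the sibling track_video. One plausible client-side routing heuristic, sketched below, sends single URLs to track_video and splits larger lists into the documented 25-URL batches. The heuristic and track_video's `url` parameter name are assumptions, not documented behavior.

```python
# Sketch: route between the sibling tools track_video and bulk_track_videos,
# splitting large URL lists into the documented 25-URL batches. The single-URL
# routing rule and track_video's "url" argument name are assumptions.

MAX_BULK_URLS = 25

def plan_tracking_calls(urls):
    """Return (tool_name, arguments) pairs for tracking the given video URLs."""
    if len(urls) == 1:
        return [("track_video", {"url": urls[0]})]
    return [
        ("bulk_track_videos", {"urls": urls[i:i + MAX_BULK_URLS]})
        for i in range(0, len(urls), MAX_BULK_URLS)
    ]

plan = plan_tracking_calls([f"https://example.com/v/{i}" for i in range(60)])
```

Sixty URLs would yield three bulk calls (25, 25, 10), keeping every request within the advertised limit.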
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'Free,' which hints at no cost, but fails to disclose critical behavioral traits such as what 'track' entails (e.g., monitoring, logging, or analysis), whether it's idempotent, rate limits, or authentication needs. The description is too vague for a mutation-like operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short phrases: 'Track up to 25 videos at once by URL. Free.' It is front-loaded with the core purpose and wastes no words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a tool that likely performs a mutation (tracking). It lacks details on behavior, return values, error handling, or how it differs from siblings. The mention of 'Free' is insufficient to compensate for these gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'urls' parameter documented as 'Array of video URLs (max 25).' The description adds minimal value by restating 'Track up to 25 videos at once by URL,' which echoes the schema. Baseline is 3 since the schema does the heavy lifting, but no additional semantics are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Track up to 25 videos at once by URL.' It specifies the action (track), resource (videos), and scope (bulk, up to 25). However, it does not explicitly differentiate from sibling tools like 'track_video' or 'track_account', which is needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Free' but does not clarify if this is a cost-related distinction or when to choose bulk tracking over individual tracking (e.g., 'track_video'). No exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_folder (Grade C)
Create a folder. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Folder name (max 255 chars) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Create a folder' which implies a write operation, but lacks details on permissions required, whether it's idempotent, error handling, or what happens on success (e.g., returns a folder ID). The mention 'Free' is vague and doesn't clarify if it refers to cost, rate limits, or other constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short sentences ('Create a folder. Free.'), front-loading the core purpose. Every word contributes meaning, with no wasted text, making it efficient for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a write operation with no annotations or output schema), the description is incomplete. It doesn't explain what the tool returns (e.g., a folder object or ID), error conditions, or behavioral nuances like whether duplicate names are allowed. For a creation tool, this leaves significant gaps in understanding how to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'name' parameter documented as 'Folder name (max 255 chars)'. The description adds no additional parameter semantics beyond what's in the schema, such as naming conventions or validation rules, so it meets the baseline for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('a folder'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'list_folders' or 'add_accounts_to_folder', which would require mentioning this is specifically for creation rather than listing or modifying existing folders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a workspace), exclusions, or compare it to sibling tools like 'list_folders' for retrieval or 'add_accounts_to_folder' for modification, leaving the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
folder_analytics (Grade C)
Get folder-level analytics. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| folder_id | Yes | Folder UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. It only mentions 'Free' (possibly hinting at no cost), but fails to describe critical aspects like what analytics are returned, format, permissions needed, rate limits, or whether it's a read-only operation. This leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise with two short phrases, making it easy to parse. However, the second phrase 'Free' is ambiguous and doesn't clearly add value, slightly reducing efficiency. Overall, it's front-loaded with the core purpose but could be more polished.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description is incomplete. It doesn't explain what analytics are returned, their format, or any behavioral traits beyond a vague 'Free' hint. Given the complexity of analytics tools and lack of structured data, this leaves the agent with insufficient information to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'folder_id' documented as 'Folder UUID'. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints. Given high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('folder-level analytics'), making the purpose immediately understandable. However, it doesn't differentiate this tool from potential sibling analytics tools like 'get_account_metrics' or 'growth_trends', which might also provide analytics but at different scopes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_account_metrics' and 'growth_trends' that might offer related analytics, there's no indication of context, prerequisites, or exclusions for choosing this folder-specific analytics tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account (Grade C)
Get account details. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | Yes | Account UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions 'Free', which might imply no cost, but doesn't disclose critical behavioral traits like read/write nature, authentication needs, rate limits, or error handling. This is inadequate for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose, but 'Free' is extraneous and doesn't add value. It's efficient but could be more focused by omitting irrelevant details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what 'account details' include, the return format, or any behavioral context, making it insufficient for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter 'account_id'. The description adds no parameter-specific information beyond what's in the schema, resulting in a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose ('Get account details') which is clear but vague—it doesn't specify what details are retrieved or differentiate it from sibling tools like 'get_account_metrics' or 'get_account_videos'. It adds 'Free' which is irrelevant to purpose clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'list_accounts' or 'get_account_metrics'. The description lacks context, prerequisites, or exclusions, leaving the agent without usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_metrics (Grade C)
Get detailed account performance metrics. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | Yes | Account UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. It mentions 'Free' which hints at no cost, but doesn't cover important aspects like authentication requirements, rate limits, response format, or whether this is a read-only operation. For a metrics tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, just two short sentences, front-loading the core purpose immediately. Every element ('Get detailed account performance metrics. Free.') serves a clear purpose without any wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a metrics retrieval tool with no annotations and no output schema, the description is incomplete. It doesn't explain what 'performance metrics' includes, the format of returned data, or any behavioral constraints. The mention of 'Free' adds some context but doesn't compensate for the missing operational details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the single parameter 'account_id' well-documented as 'Account UUID'. The description doesn't add any parameter-specific information beyond what the schema provides, which is acceptable given the comprehensive schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('detailed account performance metrics'), making it immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_account' or 'list_accounts', which might provide different types of account information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_account' or 'list_accounts'. The mention 'Free' might imply cost considerations but doesn't clarify functional differences or usage contexts compared to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_videos (Grade C)
List videos for a specific account. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Results per page (default 20) | |
| account_id | Yes | Account UUID | |
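Because `limit` is optional with a documented default of 20, a client can simply omit it and let the server apply the default. The helper below is hypothetical; it sketches assembling the call arguments so that unset options never appear in the payload.

```python
# Sketch: assemble get_account_videos arguments, omitting `limit` so the
# server applies its documented default of 20. Helper and IDs are illustrative.

def build_call(account_id, limit=None):
    """Build a tools/call params object, dropping unset optional arguments."""
    args = {"account_id": account_id}
    if limit is not None:
        args["limit"] = limit
    return {"name": "get_account_videos", "arguments": args}

default_call = build_call("33333333-3333-3333-3333-333333333333")
larger_call = build_call("33333333-3333-3333-3333-333333333333", limit=50)
```

Omitting the key, rather than sending `"limit": null`, avoids ambiguity about whether the client intended to override the default.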
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'Free', which hints at no cost, but lacks details on permissions, rate limits, pagination, or what 'List' entails (e.g., format, sorting). For a read operation with no annotation coverage, this is insufficient behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short phrases, front-loading the core purpose ('List videos for a specific account') and adding a brief note ('Free'). Every word earns its place without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., video list format, metadata), behavioral aspects like error handling, or how it differs from similar tools. For a tool with two parameters and read functionality, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the parameters (account_id as UUID, limit with default). The description adds no additional parameter semantics beyond implying account-specific filtering, which is already clear from the schema. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('videos for a specific account'), making the purpose immediately understandable. However, it doesn't distinguish this tool from sibling tools like 'list_videos' or 'get_video', which could serve similar purposes but with different scopes or parameters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'list_videos' or 'get_video'. It mentions 'Free', which might imply cost considerations, but doesn't clarify if this is a distinguishing factor from paid tools or other free tools in the set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
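The gaps flagged above (no sibling disambiguation, no "use X instead of Y when Z" guidance) can be closed in a few lines of the definition itself. Below is a hypothetical rewrite of the account-videos tool, sketched as a Python dict; the wording and the sibling names it references are assumptions drawn from this review, not the server's actual definition.

```python
# Hypothetical improved definition for the account-videos listing tool.
# The description now front-loads purpose AND routes agents away from
# overlapping siblings, addressing the selection ambiguity noted above.
get_account_videos = {
    "name": "get_account_videos",
    "description": (
        "List videos for a specific tracked account. Free. "
        "Use list_videos to list videos across all accounts, "
        "and get_video for full details on a single video."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "account_id": {"type": "string", "description": "Account UUID"},
            "limit": {
                "type": "integer",
                "description": "Results per page",
                "default": 20,
            },
        },
        "required": ["account_id"],
    },
}
```

The added sentence costs roughly a dozen tokens but gives an agent an explicit decision rule between the three overlapping tools.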
get_analysis (Grade C)
Get existing AI analysis for a video. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| video_id | Yes | Video UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Free', hinting at no cost, but fails to describe other critical traits such as whether this is a read-only operation, any rate limits, authentication needs, or what the output format looks like (e.g., JSON structure, error handling). This leaves significant gaps for an AI agent to understand how to invoke it safely and effectively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short phrases ('Get existing AI analysis for a video. Free.'), front-loaded with the core purpose and no wasted words. Every sentence earns its place by stating the action and a key behavioral hint, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of retrieving AI analysis (which could involve data formats, error cases, or dependencies), the description is incomplete. No annotations or output schema are provided, and the description lacks details on return values, error handling, or prerequisites. While the purpose is clear, the tool's full context for safe and correct usage is not adequately covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with 'video_id' documented as 'Video UUID'. The description does not add any meaning beyond this, as it does not explain the parameter's role or constraints further. With high schema coverage, the baseline score of 3 is appropriate, as the schema adequately handles parameter documentation without extra value from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'existing AI analysis for a video', making the purpose specific and understandable. However, it does not explicitly differentiate from sibling tools like 'analyze_video' or 'get_video', which could provide similar or related functionality, leaving room for ambiguity in tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance with 'Free', which might imply cost considerations but does not specify when to use this tool versus alternatives like 'analyze_video' (which might generate new analysis) or 'get_video' (which might retrieve video metadata). No explicit when/when-not or alternative tools are mentioned, leading to potential misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_credits (Grade B)
Check workspace credit balance. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'Free' which hints at no cost, but doesn't disclose other behavioral traits like authentication needs, rate limits, response format, or whether this is a read-only operation. For a tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at just five words ('Check workspace credit balance. Free.'), front-loaded with the core purpose, and has zero wasted words. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is minimally adequate. However, it lacks context about what 'credits' represent, how the balance is returned, or any prerequisites. For a tool that likely returns important financial/usage data, more completeness would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description doesn't need to add parameter information, and it appropriately doesn't mention any parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check workspace credit balance' specifies the verb ('Check') and resource ('workspace credit balance'). However, it doesn't differentiate from sibling tools, as none appear to be credit-related alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives is provided. The description mentions 'Free' but doesn't clarify if this is a distinguishing feature from paid tools or when credit checking is appropriate in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
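For a parameterless tool like this, the description is the only place to document the return shape, since no output schema exists. A sketch of what that could look like; the field names in the description are assumptions for illustration, not the server's documented response.

```python
# Hypothetical get_credits definition with the output shape and read-only
# nature stated in prose, compensating for the missing output schema.
get_credits = {
    "name": "get_credits",
    "description": (
        "Check workspace credit balance. Free, read-only. "
        "Returns JSON with the remaining credit count and, if applicable, "
        "the date the balance resets."
    ),
    "inputSchema": {"type": "object", "properties": {}},
}
```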
get_radar_results (Grade C)
Get results of a previous radar search. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| search_id | Yes | Search UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but offers minimal behavioral insight. It mentions 'Free' which hints at no cost, but doesn't cover critical aspects like whether this is a read-only operation, response format, pagination, error handling, or rate limits. For a tool with zero annotation coverage, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and to the point with two short phrases. However, 'Free' feels tacked on without clear relevance to tool selection, slightly reducing efficiency. Overall, it's well-structured but could be more focused.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a simple input schema, the description is incomplete. It lacks details on what 'results' entail, how they're returned, or any behavioral traits needed for effective use. The mention of 'Free' doesn't compensate for these gaps in a tool that retrieves data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'search_id' documented as 'Search UUID'. The description adds no additional parameter context beyond what the schema provides, such as how to obtain the search_id or format specifics. Baseline 3 is appropriate since the schema handles documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get results') and resource ('previous radar search'), making the purpose understandable. However, it doesn't differentiate from sibling tools like 'radar_history' or 'search_content', which appear related to radar/search functionality, so it doesn't achieve full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Free' which might imply cost considerations, but doesn't specify prerequisites (e.g., needing a search_id from a previous operation) or contrast with tools like 'radar_history' or 'search_content' that handle similar data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
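The prerequisite gap noted above (a search_id must come from somewhere) is the kind of workflow dependency a description can state in one sentence. A hypothetical rewrite, assuming from the evaluation text that a sibling tool initiates the radar search:

```python
# Illustrative fix: name the prerequisite and the workflow entry point,
# so an agent does not call this tool before any search exists.
get_radar_results = {
    "name": "get_radar_results",
    "description": (
        "Get results of a previous radar search. Free. "
        "Requires a search_id returned by an earlier radar search call; "
        "this tool cannot start a new search itself."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "search_id": {"type": "string", "description": "Search UUID"},
        },
        "required": ["search_id"],
    },
}
```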
get_video (Grade C)
Get video details including metrics and music info. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| video_id | Yes | Video UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Free.' which hints at cost-free usage, but lacks critical details: whether this is a read-only operation, authentication requirements, rate limits, error handling, or what 'details' specifically include. For a tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose in the first clause. 'Free.' adds supplementary information efficiently. However, the second part ('including metrics and music info') could be integrated more smoothly, and overall it lacks structural elements like bullet points or explicit sections, though this is acceptable given its length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (retrieving video details with metrics and music info), no annotations, and no output schema, the description is incomplete. It doesn't explain the return format, what 'metrics' or 'music info' entail, or potential limitations. For a tool with rich expected output but no structured documentation, more descriptive context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't explicitly mention parameters, but the input schema has 100% coverage with one required parameter ('video_id'). The schema's description ('Video UUID') provides adequate documentation. Since schema coverage is high, the baseline score is 3, as the description adds no parameter-specific value beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get video details including metrics and music info.' This specifies the verb ('Get'), resource ('video details'), and scope of information returned. However, it doesn't explicitly differentiate this from sibling tools like 'get_video_history' or 'get_analysis', which might also retrieve video-related information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance with 'Free.' which might imply cost considerations, but offers no explicit when-to-use instructions. It doesn't mention when to choose this tool over alternatives like 'get_video_history' or 'analyze_video', nor does it specify prerequisites or exclusions. The guidance is insufficient for informed tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_video_history (Grade C)
Get video performance snapshots over time. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Days of history (1-365, default 30) | |
| video_id | Yes | Video UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Free' which hints at no cost, but doesn't address critical aspects like rate limits, authentication requirements, data freshness, pagination, error conditions, or what 'performance snapshots' specifically include. For a tool with no annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two brief sentences. 'Get video performance snapshots over time' efficiently communicates the core purpose, and 'Free' adds a useful constraint without unnecessary elaboration. Every word earns its place with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a tool that retrieves historical performance data, the description is insufficiently complete. It doesn't explain what 'performance snapshots' include (views, engagement, revenue?), the format of returned data, temporal granularity, or limitations. For a historical data retrieval tool with zero structured metadata, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with both parameters ('video_id' and 'days') clearly documented in the schema. The description doesn't add any meaningful parameter semantics beyond what the schema already provides (e.g., it doesn't explain what 'performance snapshots' consist of or how the days parameter affects the output). Baseline 3 is appropriate when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get video performance snapshots over time' specifies both the verb ('Get') and resource ('video performance snapshots'), with the temporal aspect ('over time') providing additional context. However, it doesn't explicitly differentiate this tool from potential siblings like 'get_video' or 'radar_history' that might also retrieve video-related data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it mentions 'Free' (possibly indicating no cost), it doesn't specify use cases, prerequisites, or comparisons to sibling tools like 'get_video' (which might retrieve current state) or 'radar_history' (which might provide different historical data).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
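The table above documents the days parameter as "1-365, default 30" in prose. JSON Schema can carry the same constraints as machine-readable fields, so clients can validate a call before sending it. A sketch that mirrors only the documented constraints, not the server's actual schema:

```python
# Hypothetical input schema for get_video_history with the documented
# bounds expressed as JSON Schema keywords instead of description text.
get_video_history_schema = {
    "type": "object",
    "properties": {
        "video_id": {"type": "string", "description": "Video UUID"},
        "days": {
            "type": "integer",
            "description": "Days of history",
            "minimum": 1,
            "maximum": 365,
            "default": 30,
        },
    },
    "required": ["video_id"],
}
```

With minimum/maximum/default in the schema, a validating client rejects days=400 locally instead of burning a round trip.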
get_workspace (Grade C)
Get workspace info: plan, credits, limits. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is 'Free', which might imply no cost, but doesn't clarify authentication needs, rate limits, or what 'workspace info' entails beyond the listed fields. For a tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but could be more structured. 'Get workspace info: plan, credits, limits. Free.' is front-loaded with the core purpose, but the second part ('Free') is ambiguous and doesn't add clear value. It's concise but not optimally structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It lists data fields (plan, credits, limits) but doesn't explain the return format, error conditions, or what 'Free' means in context. For a tool with zero structured metadata, this leaves too many gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so there are no parameters to document. The description doesn't need to add parameter semantics, and it appropriately avoids discussing non-existent inputs. A baseline of 4 is applied as it meets expectations for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('workspace info'), and lists the specific data returned (plan, credits, limits). It distinguishes itself from siblings like get_account or get_credits by focusing on workspace-level information. However, it doesn't explicitly differentiate from all siblings, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_account or get_credits, nor does it mention any prerequisites or context for usage. The single word 'Free' is ambiguous—it could refer to cost, but without clarification, it doesn't serve as effective usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
growth_trends (Grade C)
Get follower and video growth trends. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Time period (default 30d) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'Free' which hints at no cost, but doesn't disclose rate limits, authentication needs, data freshness, or what 'trends' specifically include (e.g., charts, percentages). For a tool with potential data implications, this leaves behavioral aspects largely opaque.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two short phrases) and front-loaded with the core purpose. Every word earns its place, though 'Free' could be integrated more smoothly. It avoids redundancy and gets straight to the point without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what 'growth trends' returns (e.g., time-series data, metrics), nor does it cover error conditions or usage constraints. For a tool that likely returns complex analytics data, this leaves too much unspecified for reliable agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter 'period' with enum values and default. The description adds no parameter-specific information beyond implying time-based trends. This meets the baseline for high schema coverage, but doesn't enhance understanding of parameter usage or effects.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resources ('follower and video growth trends'), making it immediately understandable. It distinguishes itself from siblings by focusing on growth trends rather than account/video details or management tasks. That said, no other trend-focused tools exist in the set, so there is little for it to differentiate against.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Free' which might imply a cost consideration, but doesn't specify prerequisites, limitations, or recommend other tools for different types of data. With many sibling tools for analytics and metrics, this lack of contextual guidance is a significant gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_accounts (Grade B)
List all tracked creator accounts. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('List all tracked creator accounts') but lacks details on output format (e.g., pagination, sorting), permissions required, or rate limits. The term 'Free' hints at no cost but doesn't specify operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—two short phrases that directly state the tool's function and a cost hint. It's front-loaded with the core purpose and wastes no words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on behavior (e.g., output structure) that would help an agent use it effectively, especially without annotations to fill gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, earning a high baseline score for not adding unnecessary information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('tracked creator accounts'), with the qualifier 'all' indicating comprehensive scope. However, it doesn't explicitly differentiate from sibling tools like 'get_account' or 'search_content', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_account' (for single accounts) or 'search_content' (for filtered searches). The mention 'Free' is ambiguous—it could imply cost-free usage but doesn't clarify functional context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_folders (Grade B)
List all folders. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. 'List all folders' implies a read-only operation, but doesn't specify pagination behavior, rate limits, permission requirements, or what 'all' means in practice. The 'Free' mention hints at cost but doesn't clarify if this refers to API cost, service tier, or something else.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at just four words. It's front-loaded with the core functionality ('List all folders') and uses minimal additional text. Every word serves a purpose, with no wasted sentences or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a listing tool. It doesn't explain what information is returned about folders, whether results are paginated, what 'all' encompasses, or any limitations. The 'Free' mention adds some context but doesn't compensate for the missing behavioral and output information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema description coverage, so the schema already fully documents the parameter situation. The description doesn't need to add parameter information, and the baseline for zero parameters is 4. The description correctly doesn't attempt to document nonexistent parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('folders'), making the purpose immediately understandable. It distinguishes from siblings like 'create_folder' by focusing on retrieval rather than creation. However, it doesn't specify scope or filtering options that might differentiate it from other listing tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'search_content' and 'list_videos' available, there's no indication of when listing all folders is appropriate versus using more specific tools. The 'Free' mention might imply cost considerations but doesn't provide practical usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_videos — Grade C
List all tracked videos. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Results per page (1-100, default 20) | |
| offset | No | Pagination offset | |
| end_date | No | Filter: posted before (YYYY-MM-DD) | |
| platform | No | Filter by platform | |
| start_date | No | Filter: posted after (YYYY-MM-DD) | |
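For illustration, a hypothetical MCP `tools/call` request for this tool, using the parameter names from the table above (the JSON-RPC envelope follows the standard MCP shape; the date values and exact transport framing are assumptions):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_videos",
    "arguments": {
      "platform": "tiktok",
      "start_date": "2025-01-01",
      "end_date": "2025-03-31",
      "limit": 50,
      "offset": 0
    }
  }
}
```

To page through results, an agent would presumably re-issue the call with `offset` advanced by `limit` (50, 100, ...), though the description never states how pagination behaves.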
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Free', hinting at no cost, but doesn't cover critical aspects like pagination behavior (implied by limit/offset), rate limits, authentication needs, or what 'tracked videos' entails. This leaves significant gaps for a tool with filtering parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, with two short sentences front-loading the core purpose. Every element ('List all tracked videos. Free.') serves a purpose without waste, making it efficient for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, filtering capabilities) and lack of annotations or output schema, the description is insufficient. It doesn't explain return values, error conditions, or how filtering interacts with pagination, leaving the agent with incomplete context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so parameters are well-documented there. The description adds no additional parameter semantics beyond implying a list operation, which is already clear from the tool name. This meets the baseline for high schema coverage but doesn't enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('tracked videos'), making the purpose evident. However, it doesn't differentiate from siblings like 'list_accounts' or 'get_account_videos', which also list resources. The mention of 'Free' adds context but doesn't fully distinguish it from similar tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'search_content' or 'get_account_videos'. The description lacks context about use cases, prerequisites, or exclusions, leaving the agent to infer usage from the tool name and parameters alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
radar_history — Grade C
List past radar searches. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (default 20) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'List past radar searches' which implies a read-only operation, but doesn't clarify permissions, rate limits, pagination, or what 'past' means temporally. 'Free' hints at no cost but lacks detail on constraints or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, with two short sentences front-loading the core purpose. Every element ('List past radar searches. Free.') earns its place without waste, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete for a tool that likely returns historical data. It doesn't explain what 'radar searches' are, the format of results, or how 'past' is defined. For a tool with potential complexity in data retrieval, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single parameter 'limit', so the schema already documents it fully. The description adds no additional parameter information beyond what's in the schema, which is acceptable given the high coverage, resulting in the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'past radar searches', making the purpose specific and understandable. However, it doesn't explicitly distinguish this from sibling tools like 'get_radar_results' or 'search_content', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Free' but doesn't explain if this implies cost considerations or limitations compared to other tools. No explicit when/when-not or alternative recommendations are included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remove_account — Grade C
Stop tracking an account. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | Yes | Account UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'Stop tracking an account' which implies a mutation, but lacks details on permissions needed, whether the action is reversible, data retention policies, or error handling. 'Free' adds minimal context but is vague.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short phrases ('Stop tracking an account. Free.'), front-loading the core action. Every word contributes meaning without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain the outcome (e.g., what 'stop tracking' entails), potential side effects, or response format, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'account_id' documented as 'Account UUID'. The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Stop tracking') and resource ('an account'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'untrack_video', which performs a similar untracking operation on a different resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'untrack_video' or what happens after removal (e.g., if data is deleted or just hidden). It mentions 'Free' but doesn't clarify if this implies no cost or other usage conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_content — Grade A
AI-powered content discovery. Costs 100 credits. Polls internally (30-60s).
| Name | Required | Description | Default |
|---|---|---|---|
| depth | No | Research depth (default: standard) | |
| query | Yes | Describe the content you want to find. Be specific. | |
| minViews | No | Minimum views filter | |
| platform | No | Target platform (default: tiktok) | |
| timeframe | No | Time period filter | |
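As a sketch, a hypothetical MCP `tools/call` request for this tool might look like the following; the parameter names come from the table above, but the `timeframe` value format is a guess, since the schema does not document accepted values:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_content",
    "arguments": {
      "query": "60-second recipe videos with on-screen ingredient text",
      "platform": "tiktok",
      "minViews": 100000,
      "timeframe": "30d",
      "depth": "standard"
    }
  }
}
```

Per the description, this single call costs 100 credits and blocks for 30-60 seconds while the server polls internally, so an agent should not retry it eagerly.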
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and adds valuable behavioral context: it discloses the cost (100 credits) and polling behavior (30-60s internally). This goes beyond what the input schema provides and helps the agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with three short sentences that each provide critical information: the core purpose, the credit cost, and the polling behavior. Every word earns its place with zero wasted text, making it highly efficient for agent comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter tool with no annotations and no output schema, the description provides good behavioral transparency but lacks information about return values, error conditions, or authentication requirements. The purpose is clear but operational context is incomplete given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 for adequate coverage through structured data alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'AI-powered content discovery' which clearly indicates the tool's function. It specifies the verb 'discovery' and resource 'content', though it doesn't explicitly differentiate from sibling tools like 'search_videos' or 'get_radar_results' which might have overlapping purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for content discovery with AI assistance, but provides no explicit guidance on when to use this tool versus alternatives like 'get_video', 'list_videos', or 'radar_history'. The cost and polling time information suggests it's for intensive searches, but no clear when/when-not rules are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
top_videos — Grade C
Get top performing tracked videos. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100, default 10) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Free', which hints at no cost, but doesn't cover other important aspects like whether this is a read-only operation, what 'top performing' means (e.g., based on views, engagement), rate limits, authentication needs, or the return format. The description is too sparse for a tool with behavioral implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, with two short sentences that are front-loaded and efficient. Every element ('Get', 'top performing tracked videos', 'Free') contributes directly to understanding the tool's purpose and a key constraint, with no wasted sentences or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of retrieving 'top performing' videos (which implies ranking criteria) and the lack of annotations and output schema, the description is incomplete. It doesn't explain what 'top performing' means, how results are sorted, what data is returned, or any limitations beyond 'Free'. For a tool with potential behavioral nuances, this leaves significant gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'limit' parameter fully documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, such as explaining how 'limit' interacts with 'top performing' criteria. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but doesn't detract either.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'top performing tracked videos', making the purpose immediately understandable. It distinguishes this tool from siblings like 'get_video' or 'list_videos' by specifying 'top performing' and 'tracked' videos, though it doesn't explicitly contrast with all similar tools like 'get_account_videos' or 'search_content'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Free', which might imply cost considerations, but doesn't clarify when to choose this over tools like 'get_account_videos', 'list_videos', or 'search_content' for retrieving video data. There are no explicit when/when-not statements or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
track_account — Grade C
Track a creator account by URL. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Account URL (e.g. https://www.tiktok.com/@username) | |
| video_limit | No | Max videos to track (1-500, default 50) | |
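A hypothetical invocation, reusing the example URL from the schema (the envelope is the standard MCP `tools/call` shape; the `video_limit` value is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "track_account",
    "arguments": {
      "url": "https://www.tiktok.com/@username",
      "video_limit": 100
    }
  }
}
```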
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states 'Track' (implying a write/mutation operation) and 'Free' (hinting at no cost), but lacks critical details: required permissions, rate limits, whether tracking is reversible (see the 'remove_account' sibling), what 'tracking' entails (e.g., ongoing monitoring vs. one-time addition), or response format. This is inadequate for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—two short phrases with zero wasted words. It's front-loaded with the core purpose ('Track a creator account by URL') and adds a useful cost hint ('Free'). Every sentence earns its place by providing essential information efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation operation with 2 parameters), lack of annotations, and no output schema, the description is incomplete. It misses behavioral context (e.g., what 'tracking' does, success/failure responses), usage prerequisites, and differentiation from siblings. While concise, it fails to compensate for the absence of structured data, leaving the agent with significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('url' and 'video_limit') well-documented in the schema. The description adds no parameter-specific information beyond implying URL usage. Since the schema handles the heavy lifting, the baseline score of 3 is appropriate—the description doesn't enhance parameter understanding but doesn't detract either.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Track') and resource ('a creator account by URL'), specifying the verb and target. It distinguishes from siblings like 'get_account' (which likely retrieves existing tracked accounts) by implying this initiates tracking. However, it doesn't explicitly contrast with 'add_accounts_to_folder' or 'remove_account', leaving some ambiguity about sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., whether the account must be public or previously untracked), exclusions (e.g., not for folders or videos), or direct comparisons to siblings like 'track_video' or 'bulk_track_videos'. The 'Free' hint might imply cost considerations but lacks context on limitations or when paid options apply.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
track_video — Grade B
Track a video by URL. Supports TikTok, Instagram, YouTube. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Full video URL (https) | |
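A hypothetical invocation (the video URL is a placeholder, and the envelope is the standard MCP `tools/call` shape):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "track_video",
    "arguments": {
      "url": "https://www.youtube.com/watch?v=VIDEO_ID"
    }
  }
}
```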
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It mentions 'Free' which hints at cost/rate limits, but doesn't disclose what 'tracking' entails (e.g., data collection, monitoring frequency, storage implications), authentication needs, or error handling. This leaves significant behavioral gaps for a tool that likely involves ongoing data collection.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (three short sentences) with zero wasted words. It front-loads the core purpose and efficiently adds supporting details (platforms, cost). Every sentence earns its place by providing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a tool that likely performs ongoing tracking. It lacks details on what tracking returns (e.g., metrics, updates), how results are accessed, duration of tracking, or success/error responses. The 'Free' hint is insufficient for the complexity implied by tracking functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with only one parameter ('url'), so the baseline is 3. The description adds value by specifying the supported platforms (TikTok, Instagram, YouTube) and the 'Free' aspect, which provides context beyond the schema's technical requirement for a 'Full video URL (https)'. This compensates adequately for the single parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Track a video') and the resource ('by URL'), specifying the supported platforms (TikTok, Instagram, YouTube). It distinguishes from siblings like 'track_account' and 'bulk_track_videos' by focusing on single video tracking, but doesn't explicitly differentiate from 'untrack_video' or 'get_video'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for tracking videos from specific platforms, but provides no explicit guidance on when to use this versus alternatives like 'bulk_track_videos' for multiple videos or 'track_account' for account-level tracking. The mention of 'Free' suggests a cost context but doesn't clarify limitations or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
untrack_video — Grade C
Stop tracking a video. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| video_id | Yes | Video UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('Stop tracking') and adds 'Free', which might hint at no monetary cost but doesn't address critical aspects like permissions needed, whether the action is reversible, side effects, or rate limits. This leaves significant gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short sentences that are front-loaded and waste no words. Every part ('Stop tracking a video. Free.') contributes directly to understanding the tool's purpose and a potential behavioral trait, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is insufficient. It lacks details on what 'tracking' means, the consequences of untracking, error conditions, or return values. The context signals indicate a simple parameter structure, but the description doesn't compensate for the missing behavioral and output information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the parameter 'video_id' documented as 'Video UUID'. The description adds no additional semantic context about this parameter, such as format examples or where to find the ID. Given the high schema coverage, the baseline score of 3 is appropriate as the schema handles the documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Stop tracking') and resource ('a video'), making the purpose immediately understandable. It distinguishes itself from sibling tools like 'track_video' by indicating the opposite operation. However, it doesn't specify what 'tracking' entails in this context, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., the video must already be tracked), exclusions, or related tools like 'track_video' or 'bulk_track_videos'. The single word 'Free' might imply no cost but doesn't clarify usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.