KonbiniAPI MCP
Server Details
Get social media data from Instagram and TikTok: profiles, posts, videos, comments, and more.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 30 of 30 tools scored.
Each tool targets a distinct resource (Instagram or TikTok) with a clear action (get, search) and entity (user, post, video, etc.). No two tools have overlapping purposes; even similar-sounding tools like tiktok_search_content and tiktok_search_videos are differentiated by result type.
Tools follow a consistent pattern: platform_verb_noun (e.g., instagram_get_user, tiktok_search_videos). The only minor inconsistency is instagram_get_highlight_stories and instagram_get_user_highlights, where the order of 'highlight' and 'stories' varies slightly, but overall the pattern is strong.
30 tools is appropriate for a server covering two major social media platforms (Instagram and TikTok). Each platform has a comprehensive set of operations (profile, posts, comments, searches, etc.), and no tools feel redundant or unnecessary.
The tool surface covers most common use cases for both platforms: user profiles, posts/videos, comments, searches, and platform-specific features (highlights, reels, live). Minor gaps include lack of direct Instagram post creation (though likely out of scope) and absence of TikTok audio search; however, core functionality is well-covered.
Available Tools
30 tools

instagram_get_highlight_stories - Instagram: Get highlight stories (Grade: A)
Returns all story items within a highlight reel, including images and videos. The highlight ID comes from the highlights list endpoint. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| highlightId | Yes | Instagram highlight ID | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
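As a concrete reading of the table above, here is a minimal sketch of calling this tool through the official MCP TypeScript SDK over Streamable HTTP. The endpoint URL and highlight ID are placeholders, and the exact item key names (and whether item_fields takes an array) are assumptions, since no output schema is published.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect over Streamable HTTP (the URL is a placeholder).
const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")),
);

// Fetch all stories in one highlight. projection_preset trims the payload;
// item_fields adds back specific keys (key names here are assumptions).
const result = await client.callTool({
  name: "instagram_get_highlight_stories",
  arguments: {
    highlightId: "<id from instagram_get_user_highlights>",
    projection_preset: "content",
    item_fields: ["taken_at"],
  },
});
console.log(result);
```

The later sketches assume a `client` connected exactly like this and show only the call.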
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations set readOnlyHint=false (so a write is possible) and destructiveHint=false, marking the tool as non-destructive. The description adds no behavioral traits beyond the schema, but the schema covers parameter details. No contradiction found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences providing essential information: what it returns, dependency on another endpoint, and optional field parameters. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and 4 parameters, the description is mostly complete. It explains the source of highlight ID and optional field usage. However, it could mention error cases or authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already explains parameters well. The description adds that fields are for 'payload reduction,' but this is partially redundant with the schema descriptions. Overall, minimal extra value, but baseline is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves story items (images and videos) from a highlight reel. It distinguishes itself from siblings like 'instagram_get_user_highlights' by focusing on content retrieval rather than listing highlights. However, it does not explicitly differentiate from other story-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the highlight ID comes from the highlights list endpoint, providing usage context. However, it does not specify when to use this tool over alternatives, nor does it mention conditions like privacy or availability of highlights.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_get_location_posts - Instagram: Get posts by location (Grade: A)
Returns recent posts tagged at a location. Fixed page size of 21 (platform limit). The location ID is a numeric Facebook Places ID. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Page size (fixed at 21 by the platform) | 21 |
| cursor | No | Pagination cursor | |
| locationId | Yes | Instagram/Facebook location ID (numeric) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
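A sketch of a single-page fetch with the same assumed client; `count` is omitted because the platform fixes it at 21, and the cursor is only passed when continuing from a previous page.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// One page of posts tagged at a location (numeric Facebook Places ID).
async function getLocationPosts(client: Client, locationId: string, cursor?: string) {
  return client.callTool({
    name: "instagram_get_location_posts",
    arguments: {
      locationId,
      ...(cursor !== undefined ? { cursor } : {}),
      projection_preset: "minimal",
    },
  });
}
```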
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the fixed page size (21) and that it returns 'recent' posts, adding meaningful behavioral context beyond the annotations (readOnlyHint=false, destructiveHint=false). It does not mention rate limits or other constraints, but the disclosed details are valuable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences, each serving a clear purpose: stating functionality and noting key constraints. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (6 params, 1 required) and no output schema, the description covers the core purpose and key behavioral constraints (fixed page size, location ID format, payload reduction options). It could mention return format or pagination behavior, but the provided details are sufficient for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds context that locationId is 'numeric Facebook Places ID' and mentions projection_preset for payload reduction, but these are partially redundant with schema descriptions. No additional semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns recent posts tagged at a location, distinguishing it from sibling tools like instagram_get_user_posts (posts by a user) and tiktok_get_tag_videos (different platform). The specific verb 'returns' and resource 'posts by location' leave no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool (getting posts by location) and notes the fixed page size and location ID format. However, it does not explicitly exclude using sibling tools like instagram_get_user_posts for similar tasks, though the name and description sufficiently differentiate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_get_post - Instagram: Get post details (Grade: A)
Returns details for a single post by its shortcode, including media, captions, and engagement counts. Supports photos, videos, and carousels. Supports projection_preset and data_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | Post shortcode (from instagram.com/p/{shortcode}/) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
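Since postId is a shortcode rather than a full URL, an agent holding a post link has to extract it first; a sketch under the same client assumption:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// The shortcode is the path segment in instagram.com/p/{shortcode}/.
function shortcodeFromUrl(url: string): string | undefined {
  return url.match(/instagram\.com\/p\/([^/?#]+)/)?.[1];
}

async function getPost(client: Client, postUrl: string) {
  const postId = shortcodeFromUrl(postUrl);
  if (!postId) throw new Error(`not an Instagram post URL: ${postUrl}`);
  return client.callTool({
    name: "instagram_get_post",
    arguments: { postId, projection_preset: "engagement" },
  });
}
```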
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-read-only and non-destructive. Description adds that it supports projection_preset and data_fields for payload reduction, which is useful but expected from parameter descriptions. No behavioral surprises.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences covering purpose, supported types, and payload reduction options. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description omits return structure but mentions 'media, captions, and engagement counts', which is sufficient for a single-item retrieval. Complete enough for a simple GET endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers 100% of parameters with descriptions. The shortcode format is documented in the schema ('Post shortcode (from instagram.com/p/{shortcode}/)'), and the description adds no additional meaning beyond it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns details for a single post by shortcode, and lists supported types (photos, videos, carousels). It differentiates from siblings like instagram_get_user_posts by focusing on a single post retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (for a single post) but does not explicitly contrast with alternatives like instagram_search_media or instagram_get_post_comments for broader search or comments.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_get_post_comments - Instagram: Get post comments (Grade: A)
Returns top-level comments on an Instagram post. Fixed page size of 15 (platform limit). Includes comment text, author info, like counts, and timestamps. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Page size (fixed at 15 by the platform) | 15 |
| cursor | No | Pagination cursor | |
| postId | Yes | Post shortcode (from instagram.com/p/{shortcode}/) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
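Because the page size is fixed at 15, collecting all comments means looping on the cursor. Where the next-page cursor lives in the response is not published (there is no output schema), so this sketch leaves extraction to a caller-supplied function:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Yields one page of top-level comments at a time until no cursor remains.
async function* allComments(
  client: Client,
  postId: string,
  extractCursor: (page: unknown) => string | undefined,
) {
  let cursor: string | undefined;
  do {
    const page = await client.callTool({
      name: "instagram_get_post_comments",
      arguments: { postId, ...(cursor !== undefined ? { cursor } : {}) },
    });
    yield page;
    cursor = extractCursor(page);
  } while (cursor !== undefined);
}
```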
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false (a write is possible, though the description does not mention any write behavior) and destructiveHint=false. Description adds value by noting fixed page size (platform limit) and included fields, which annotations do not cover. There is no contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is four sentences, each adding distinct information: purpose, limits (fixed page size), included fields, and configurable options. Slightly verbose for including field details already in schema, but generally efficient. Front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains return includes comment text, author info, likes, timestamps. However, it does not mention pagination cursor usage or that only top-level comments are returned (no replies). Sibling tools like 'tiktok_get_comment_replies' imply replies are separate. Overall adequate but could note cursor parameter usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. The description mentions 'projection_preset, data_fields, and item_fields' but adds no syntax or behavioral nuance beyond the schema. Semantics are adequately covered by schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns 'top-level comments on an Instagram post', specifying verb (returns), resource (top-level comments on Instagram post), and context (fixed page size, included fields). Sibling tools like 'tiktok_get_video_comments' or 'instagram_get_post' help distinguish, but the description itself is self-contained and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly mentions fixed page size (15) and supports pagination with cursor, implicitly guiding use for iterative fetching. However, no explicit alternatives or when-not-to-use guidance is provided, though the sibling set includes other comment-related tools (e.g., 'tiktok_get_comment_replies') and the tool's focus on top-level comments is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_get_user - Instagram: Get user profile (Grade: A)
Returns profile information for an Instagram user including bio, follower counts, profile picture, and account metadata. Look up any public Instagram account by username. Supports projection_preset and data_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| username | Yes | Instagram username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
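The simplest call shape in the set; a sketch assuming the same client. The schema accepts the username with or without "@", so no normalization is needed:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// "identity" keeps the response to profile basics (bio, counts, avatar).
async function getProfile(client: Client, username: string) {
  return client.callTool({
    name: "instagram_get_user",
    arguments: { username, projection_preset: "identity" },
  });
}
```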
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=false, yet the description reads as a pure read operation (it only returns profile info). It does not contradict the annotations, but it could be clearer about side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, but it could be more concise by removing redundant phrasing like 'public Instagram account' (implied by Instagram).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description could detail return format explicitly, but it lists key fields. It's adequate but not thorough for a social media profile tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds some value by mentioning projection_preset and data_fields for payload reduction, but this is already implied by the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns Instagram user profile information with specific fields like bio, follower counts, etc., and distinguishes from siblings that focus on posts, stories, or media.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides limited usage guidance: it mentions looking up public accounts by username but does not explicitly state when not to use it or alternative tools for non-public information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_get_user_highlights - Instagram: Get user story highlights (Grade: A)
Returns the list of story highlight reels on a user's profile. Use the highlight endpoint to get individual stories within a highlight. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | Pagination cursor | |
| username | Yes | Instagram username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
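The description points to a two-step workflow: list the highlight reels here, then fetch stories with instagram_get_highlight_stories. A sketch of that chain; how highlight IDs appear in the listing response is not documented, so the parser is declared as a hypothetical:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical: extracts a highlight ID from the (undocumented) listing shape.
declare function pickFirstHighlightId(page: unknown): string;

async function getFirstHighlightStories(client: Client, username: string) {
  const highlights = await client.callTool({
    name: "instagram_get_user_highlights",
    arguments: { username },
  });
  return client.callTool({
    name: "instagram_get_highlight_stories",
    arguments: { highlightId: pickFirstHighlightId(highlights) },
  });
}
```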
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Overall annotations are present but mostly default (false). The description adds behavior info about payload reduction via projection preset and fields. However, it does not disclose potential side effects or data handling beyond what annotations imply. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each adding value: purpose, sibling reference, and field usage guidance. No wasted words, though it could be slightly more concise by merging the second and third sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is no output schema, the description mentions returning a list of highlights and suggests a related endpoint. It provides sufficient context for a listing tool with well-documented parameters, though user prerequisites (e.g., public profile or authentication) are not addressed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by mentioning projection_preset, data_fields, and item_fields for payload reduction, clarifying their purpose beyond the schema enums and descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns the list of story highlight reels on a user's profile, using specific verbs ('Returns') and resource ('story highlight reels'). It distinguishes from sibling 'instagram_get_highlight_stories' by mentioning that endpoint is for getting individual stories within a highlight.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions using the highlight endpoint for individual stories, providing a clear alternative. However, it does not explicitly state when not to use this tool or provide prerequisites like authentication or rate limits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_get_user_posts - Instagram: Get user posts (Grade: A)
Returns a paginated list of posts from an Instagram user's profile feed. Maximum 12 posts per page. Includes photos, videos, carousels, and engagement counts. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of posts to fetch (maximum: 12) | 12 |
| cursor | No | Pagination cursor | |
| username | Yes | Instagram username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are partially provided (readOnlyHint=false implies mutation is possible, but destructiveHint=false suggests non-destructive). The description adds that the tool is paginated and returns specific content types, but does not disclose rate limits, authentication needs, or any side effects beyond data retrieval. The annotations indicate it may not be purely read-only, but the description lacks detail on write behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at four short sentences, front-loading key information (returns paginated list, max 12 posts, content types) before mentioning optional parameters. Every sentence adds value with no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (6 parameters, one required) and full schema coverage, the description is complete enough. It covers pagination, content types, and payload reduction options. There is no output schema, but the schema's item_fields description hints at the response structure (data.orderedItems[]). The sibling context is large, but the tool's purpose is clear within that context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers all parameters with descriptions and enums, achieving 100% coverage. The description mentions projection_preset, data_fields, and item_fields but does not add meaning beyond the schema. Baseline of 3 is appropriate since schema already explains each parameter sufficiently.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Returns', the resource 'posts from an Instagram user's profile feed', and specifies the scope 'paginated list' with a maximum of 12 posts per page. It distinguishes from sibling tools by focusing on user profile posts, unlike location or highlight tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a user's feed posts and specifies a maximum of 12 per page. However, it does not explicitly state when not to use this tool or mention alternatives for other Instagram post types like reels or tagged posts, though siblings provide such distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_get_user_reels - Instagram: Get user reels (Grade: A)
Returns a paginated list of reels from an Instagram user's profile. Maximum 12 reels per page. Includes video URLs, captions, and engagement counts. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of reels to fetch (maximum: 12) | 12 |
| cursor | No | Pagination cursor | |
| username | Yes | Instagram username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=false and destructiveHint=false, indicating a possibly mutating but non-destructive operation. The description adds that the tool supports pagination and projection_preset for payload reduction, which is useful context beyond annotations. However, it does not discuss rate limits, authentication needs, or error behaviors, which would enhance transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four short sentences that efficiently convey purpose, constraints, and key features. It is front-loaded with the main action and includes all critical details without unnecessary text. Slightly more structure (e.g., bullet points) could improve readability but is not required.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (6 parameters, all documented) and no output schema, the description covers the tool's purpose, pagination, and projection options. It mentions engagement counts but does not detail the return structure; however, the schema's parameter descriptions for data_fields and item_fields provide some insight into output fields. The description is complete enough for typical usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds contextual detail by mentioning projection_preset, data_fields, and item_fields for payload reduction, but these are also described in the schema. Thus, the description adds marginal value; baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a paginated list of reels from an Instagram user's profile, specifying the resource (reels), source (user's profile), and key fields (video URLs, captions, engagement counts), which distinguishes it from sibling tools like Instagram posts or stories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to retrieve user reels) and mentions the maximum page size (12) and pagination via cursor. However, it does not explicitly state when not to use it or differentiate from other Instagram tools, though the tool name and sibling list imply its specific niche.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_get_user_tagged - Instagram: Get user tagged posts (Grade: A)
Returns Instagram posts where the user has been tagged by other accounts. Maximum 12 posts per page. Includes full post details and engagement counts. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of posts to fetch (maximum: 12) | 12 |
| cursor | No | Pagination cursor | |
| username | Yes | Instagram username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key behavioral traits: maximum 12 posts per page, includes full post details and engagement counts, supports payload reduction. Annotations indicate not read-only (readOnlyHint false), not destructive (destructiveHint false), and idempotentHint false. Description does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences, no fluff. Front-loaded with main purpose. Could be slightly more structured but overall concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 6 parameters with 100% schema coverage, no output schema, and moderate complexity. Description covers main behavior (tagged posts, limit, payload reduction) adequately for agent selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% but description adds minimal semantics beyond schema. It mentions projection_preset and data_fields/item_fields for payload reduction, which adds value, but does not elaborate on their usage in detail. Scalars are adequately described in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns Instagram posts where the user is tagged by others. Verb 'get' plus resource 'user tagged posts' is specific. Distinguishes from sibling tools like instagram_get_user_posts by specifying 'tagged' posts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for retrieving tagged posts but does not explicitly say when to use this over alternatives like instagram_search_media or instagram_get_user_posts. No when-not-to-use guidance, though sibling names provide some context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
instagram_search_media - Instagram: Search media (Grade: A)
Searches Instagram for reels and videos matching a keyword. Maximum 24 results per page. Returns video details including captions and engagement counts. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of results to fetch (maximum: 24) | 24 |
| query | Yes | Search query | |
| cursor | No | Pagination cursor | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
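A keyword-search sketch under the same client assumption; count is capped at 24 by the schema:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// "engagement" keeps like/view counts while dropping heavier fields.
async function searchMedia(client: Client, query: string) {
  return client.callTool({
    name: "instagram_search_media",
    arguments: { query, count: 24, projection_preset: "engagement" },
  });
}
```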
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations show no read-only, idempotent, or destructive hints (readOnlyHint, idempotentHint, destructiveHint all false). The description adds transparency by stating it returns video details (captions, engagement) and supports payload reduction features (projection_preset, data_fields, item_fields). This complements annotations, which are sparse. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is four concise sentences with no wasted words. Each sentence adds unique value: what it does, result limit, return details, and payload reduction options.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema and 6 parameters (1 required), the description covers the core functionality, pagination limit, and customization options. It is missing explicit return format details (e.g., structure of results), but the description of 'returns video details including captions and engagement counts' provides sufficient context. A slightly lower score due to lack of output schema and no mention of error handling or rate limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all parameters. The description adds context that the tool supports projection_preset, data_fields, and item_fields for payload reduction, which is a helpful summary. However, it does not detail how each parameter interacts. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches for reels and videos by keyword. It differentiates from siblings like instagram_get_user_reels by focusing on keyword search, but doesn't explicitly distinguish from other search tools (though none exist in siblings for Instagram).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for keyword-based search of reels and videos. It mentions a maximum of 24 results per page and pagination via cursor. However, it does not explicitly state when to use this tool versus alternatives like instagram_get_user_reels; it is the only Instagram search tool among the siblings. No when-not-to guidelines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_audio - TikTok: Get audio details (Grade: A)
Returns metadata for a TikTok audio track including title, artist, duration, usage count, and cover image. Look up any sound by its audio ID. Supports projection_preset and data_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| audioId | Yes | TikTok audio/music ID | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are minimal (no readOnlyHint, idempotentHint, or destructiveHint), so the description carries the transparency burden. It accurately describes a non-destructive read operation (returning metadata), which is consistent with the annotations. Additional details about payload reduction features (projection_preset, data_fields) provide beyond-basic context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and front-loaded with the core purpose. Every sentence adds value: the first states what it returns, the second how to look a sound up, and the third how to trim the payload. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is no output schema, the description adequately summarizes return fields. It covers primary metadata and reduction options, which is sufficient for audio metadata retrieval. However, it could mention pagination or rate limits if applicable, and does not explain that the tool is read-only (though annotations hint at it).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and parameters are well-described in the schema. The description adds minimal parameter semantics by mentioning 'audio ID' and 'projection_preset/data_fields for payload reduction', but does not significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies a clear verb ('Returns') and resource ('metadata for a TikTok audio track'), listing exact fields like title, artist, duration, usage count, and cover image. It also clarifies the lookup method ('by its audio ID') and mentions payload reduction features, distinguishing it from sibling tools that focus on videos or users.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving audio metadata but offers no guidance on when to use this tool versus alternatives like tiktok_get_audio_videos (which retrieves videos using that audio). No exclusion criteria or alternative tool names are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_audio_videos - TikTok: Get videos with audio (Grade: A)
Returns a paginated list of TikTok videos using a specific audio track. Maximum 30 per page. Discover trending content by sound or music. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of videos to fetch (maximum: 30) | 30 |
| cursor | No | Pagination cursor | 0 |
| audioId | Yes | TikTok audio/music ID | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
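tiktok_get_audio and this tool take the same audioId, which suggests a natural pairing: metadata first, then the videos using the sound. A sketch of that chain under the same client assumption:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Track metadata plus one page (max 30) of videos using it.
async function audioWithVideos(client: Client, audioId: string) {
  const track = await client.callTool({
    name: "tiktok_get_audio",
    arguments: { audioId },
  });
  const videos = await client.callTool({
    name: "tiktok_get_audio_videos",
    arguments: { audioId, count: 30 },
  });
  return { track, videos };
}
```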
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations specify readOnlyHint=false (not declared read-only), destructiveHint=false, and idempotentHint=false. Description adds pagination details (max 30 per page) and mentions payload reduction options. No annotation contradiction. Could benefit from mentioning rate limits or whether this is a write operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences, clear, no fluff. Front-loaded with purpose and limit. Could be slightly more structured (e.g., list format for parameters).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 6 parameters with 100% schema coverage and no output schema. Description covers pagination and field selection. Lacks return value description, but no output schema exists. Adequate for this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so schema already describes all parameters. Description adds value by summarizing the 'payload reduction' purpose of projection_preset, data_fields, item_fields, but does not provide examples or further explanation. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool returns a paginated list of TikTok videos using a specific audio track. The verb 'get videos with audio' distinguishes it from other TikTok tools like tiktok_get_video (single video) or tiktok_get_tag_videos (videos by tag).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description mentions 'Discover trending content by sound or music' implying when to use it. However, no explicit alternatives or when-not-to-use guidance is provided. Sibling tools indicate other video listing tools (by user, tag, etc.) but description does not differentiate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_collection_videos - TikTok: Get collection videos (Grade: A)
Returns a paginated list of videos in a TikTok collection (playlist or mix). Maximum 35 per page. Includes full video details and engagement counts. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of videos to fetch (maximum: 35) | 20 |
| cursor | No | Pagination cursor | 0 |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| collectionId | Yes | TikTok collection ID (mix ID) | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false (a mutation is possible, though the description suggests a read-only fetch), openWorldHint=true (output may vary), and destructiveHint=false. The description adds context about pagination limits and payload reduction options, which are not in annotations. No contradiction detected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (four short sentences) and front-loaded with the key purpose. Every sentence adds value, though it could be slightly more structured (e.g., bullet points).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 6 parameters, 100% schema coverage, no output schema, and the complexity of TikTok collections, the description adequately states the purpose and pagination limit. It could mention the default projection preset and possible return structure, but is sufficient for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description mentions projection_preset, data_fields, and item_fields for payload reduction, but does not add significant meaning beyond the schema's descriptions. The parameters are well-documented in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a paginated list of videos in a TikTok collection, including full video details and engagement counts. It distinguishes itself from sibling tools like tiktok_get_user_videos and tiktok_get_tag_videos by specifying the resource type (collection).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching videos from a playlist or mix, and specifies a maximum of 35 per page, which guides pagination. However, it does not explicitly state when not to use this tool or mention alternatives among the many tiktok_ sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_comment_replies - TikTok: Get comment replies (Grade: B)
Returns a paginated list of replies to a TikTok comment. Maximum 50 per page. Includes author info, like counts, and reply timestamps. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of replies to fetch (maximum: 50) | 50 |
| cursor | No | Pagination cursor | 0 |
| videoId | Yes | TikTok video ID | |
| commentId | Yes | TikTok comment ID | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
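This is the only tool here with two required IDs; both would in practice come from a prior tiktok_get_video_comments call (whose response layout is not published). A sketch under the same client assumption:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// One page (max 50) of replies under a specific comment on a video.
async function getReplies(client: Client, videoId: string, commentId: string) {
  return client.callTool({
    name: "tiktok_get_comment_replies",
    arguments: { videoId, commentId, count: 50 },
  });
}
```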
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate it is not read-only (readOnlyHint: false), not idempotent, and not destructive. The description adds pagination details and payload reduction, but doesn't disclose rate limits, authentication needs, or error behavior. Acceptable given annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences, each providing key information: functionality, limits, included fields, and customizability. No filler. Could be slightly more concise by combining sentences, but efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, pagination, projection presets) and no output schema, the description covers core functionality but lacks information about return format structure (e.g., how pagination appears in response). Adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are adequately described in the schema. The description adds context about maximum 50 per page and the purpose of projection fields, but does not add significant meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly returns a paginated list of replies to a TikTok comment, with specific details on what's included (author info, likes, timestamps). Distinguishes itself from sibling tools like tiktok_get_video_comments, and the purpose is clear and specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states maximum per page and mentions payload reduction parameters, but does not explicitly state when to use this vs. other comment-related tools (e.g., tiktok_get_video_comments). Implied usage but no exclusions or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_tag_videos - TikTok: Get videos with tag (Grade: A)
Returns TikTok videos associated with a hashtag or challenge. The tag name is resolved to an internal ID automatically. Includes video details and engagement counts. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of videos to fetch (maximum: 30) | 30 |
| cursor | No | Pagination cursor | 0 |
| tagName | Yes | Tag name | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Use one of: full, minimal, identity, engagement, or content. | minimal |
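Because the server resolves the tag name to an internal ID itself, the plain name is enough; stripping a leading "#" in this sketch is a defensive choice, not a documented requirement:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// One page (max 30) of videos under a hashtag or challenge.
async function getTagVideos(client: Client, tagName: string) {
  return client.callTool({
    name: "tiktok_get_tag_videos",
    arguments: { tagName: tagName.replace(/^#/, ""), count: 30 },
  });
}
```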
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint false, openWorldHint true, and destructiveHint false; the description adds that the tag name is automatically resolved to an internal ID, which is a key behavioral detail not in annotations. It also mentions support for projection_preset and field reduction, which helps set expectations about payload control.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is three sentences, all relevant and front-loaded with the core purpose. Every sentence adds value, but the mention of projection_preset could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the schema covers all parameters and annotations provide key hints, the description is adequate. The tool is a straightforward retrieval with no output schema, so the description covers the essential behavior (tag resolution, payload reduction) without gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds limited value beyond the schema. The description briefly mentions projection_preset and data_fields/item_fields, but does not elaborate on their usage or defaults beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns TikTok videos associated with a hashtag or challenge, using specific verbs "returns" and "resolved." It distinguishes from siblings like tiktok_get_audio_videos and tiktok_get_collection_videos by specifying the resource type (tag videos).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching tag-associated videos, but does not explicitly state when not to use it or compare with alternatives like tiktok_search_videos. However, the sibling list provides context for an agent to infer differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user - Tiktok: Get user profile (Grade: A)
Returns profile information for a TikTok user including bio, follower counts, verification status, and profile picture. Look up any public TikTok account by username. Supports projection_preset and data_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
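Since this is the simplest lookup on the TikTok side, it is a good place to sketch a full call. The example below uses the official MCP Python SDK over Streamable HTTP; the server URL and username are placeholders, and the result handling assumes the SDK's standard text content blocks rather than anything documented for this server.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; use the connector's real URL

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The "@" prefix is optional per the schema; the "identity"
            # preset trims the payload to profile-identity fields.
            result = await session.call_tool(
                "tiktok_get_user",
                {"username": "@somecreator", "projection_preset": "identity"},
            )
            for block in result.content:
                if block.type == "text":
                    print(block.text)

asyncio.run(main())
```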
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false (not declared read-only, even though fetching a profile is effectively a read), openWorldHint=true (no fixed set of data), idempotentHint=false (may not be idempotent), and destructiveHint=false (safe). The description adds behavioral context about payload reduction via projection_preset and data_fields, which is helpful beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences covering purpose, usage, and advanced features. No wasted words. Information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple profile-lookup tool with good schema coverage and no output schema, the description is adequate. It covers purpose, parameters, and a tip. It could mention that data_fields layers on top of projection_preset, but it is still complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all 3 parameters. The description adds value by explaining the benefit of projection_preset and data_fields ('payload reduction'), which goes beyond the enum and type info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns TikTok user profile information including specific fields (bio, follower counts, verification status, profile picture) and explains the lookup mechanism (by username). It also distinguishes from sibling tools by focusing on the top-level profile rather than videos or collections.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (to get user profile info) but doesn't explicitly state when not to use it or provide alternatives among the many TikTok sibling tools. However, given the diverse sibling names, the purpose is relatively clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user_collections - Tiktok: Get user collections (Grade: A)
Returns a paginated list of video collections (playlists and mixes) on a TikTok user's profile. Includes collection name, cover image, and video count. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of collections to fetch (maximum: 30) Default: 30 | |
| cursor | No | Pagination cursor Default: 0 | |
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
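A plausible agent pattern, sketched below with illustrative values: list the collections, then hand an ID from the result to tiktok_get_collection_videos. A collection ID presumably lives under data.orderedItems[], but its exact key name is not documented here, so that step stays a comment.

```python
# Step 1: list a user's collections (username illustrative).
args = {
    "username": "somecreator",
    "count": 30,                    # per-page maximum
    "projection_preset": "minimal",
}
# Step 2 (assumed): pull a collection ID out of data.orderedItems[] in the
# response and pass it to tiktok_get_collection_videos to enumerate videos.
```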
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate openWorldHint=true, and the description neither contradicts them nor omits any behavioral details they leave uncovered. The description includes pagination and payload reduction features.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences that efficiently front-load the key output and customization options. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description covers purpose, pagination, and customization options adequately but lacks explicit enumeration of return fields or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so description adds marginal value. The description mentions projection_preset, data_fields, and item_fields but does not add detail beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns paginated collections (playlists/mixes) on a TikTok user's profile with specific fields. However, it does not differentiate from sibling tools like tiktok_get_user_videos or tiktok_get_collection_videos.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for retrieving user collections, but no explicit guidance on when to use this vs other tiktok_get_user_* tools or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user_followers - Tiktok: Get user followers list (Grade: A)
Returns a paginated list of accounts following a TikTok user. Maximum 30 per page. Includes profile details for each follower account. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of users to fetch (maximum: 30) Default: 30 | |
| cursor | No | Pagination cursor Default: 0 | |
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
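A hedged paging sketch follows, assuming an initialized ClientSession named session (as in the tiktok_get_user example), that the result arrives as a JSON text block, and that the cursor advances as an item offset; none of those response details are documented here, only the data.orderedItems[] path named by item_fields.

```python
import json

async def list_followers(session, username: str, max_pages: int = 3) -> list:
    """Page through followers 30 at a time (the per-page maximum)."""
    followers, cursor = [], 0
    for _ in range(max_pages):
        result = await session.call_tool(
            "tiktok_get_user_followers",
            {"username": username, "count": 30, "cursor": cursor},
        )
        payload = json.loads(result.content[0].text)  # assumed JSON text block
        items = payload.get("data", {}).get("orderedItems", [])
        if not items:
            break
        followers.extend(items)
        cursor += len(items)  # assumption: the cursor is an item offset
    return followers
```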
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate openWorldHint=true (result may vary) and destructiveHint=false, which aligns with a read-like operation. The description mentions pagination and max 30, but doesn't clarify rate limits or potential errors beyond schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is a single short paragraph with key details upfront. Could omit 'projection_preset, data_fields, and item_fields for payload reduction' as those are in schema, but overall concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and 6 parameters, the description covers the basic purpose and key constraints. Lacks description of return structure or pagination details (e.g., how cursor works). Adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented. The description adds context for projection_preset and data_fields/item_fields for payload reduction, but these are also in schema. No additional meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a paginated list of followers of a TikTok user, with specifics like max 30 per page and profile details. This distinguishes it from sibling tools like tiktok_get_user_following or tiktok_get_user_videos.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes max per page and mention of projection_preset, but does not compare alternatives or state when not to use it. No explicit guidance on when to prefer this over tiktok_get_user_following, for example.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user_following - Tiktok: Get user following list (Grade: A)
Returns a paginated list of accounts a TikTok user follows. Maximum 30 per page. Includes profile details for each followed account. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of users to fetch (maximum: 30) Default: 30 | |
| cursor | No | Pagination cursor Default: 0 | |
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
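Because this tool mirrors tiktok_get_user_followers, one natural combination is computing mutual follows from the two lists. The sketch below assumes each returned item carries a username key; that field name is illustrative, not documented.

```python
def mutual_follows(followers: list[dict], following: list[dict]) -> set[str]:
    """Accounts present in both lists; the "username" key is assumed."""
    names_in = {item.get("username") for item in followers}
    names_out = {item.get("username") for item in following}
    return (names_in & names_out) - {None}
```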
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false (so it may not be read-only) but destructiveHint=false, so it is safe. The description adds that it returns a paginated list with profile details and discloses the 30-per-page maximum. No contradiction with the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences with no extraneous info, front-loaded: purpose, pagination limit, the profile details included, and payload reduction options. Every sentence is necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains return includes paginated list with profile details. Could mention that cursor is string and default 0, but schema already covers that. Overall sufficient for a list endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% but description adds value by explaining payload reduction via projection_preset, data_fields, item_fields. Also clarifies max count (30) and default cursor.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a paginated list of accounts a TikTok user follows, differentiating from sibling tools like tiktok_get_user_followers (get followers) and tiktok_get_user (get user profile). Describes specific resource: user's following list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions maximum 30 per page and pagination via cursor. Does not explicitly state when not to use or compare to alternatives, but the sibling list is large and context implies this is for listing following.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user_likes - Tiktok: Get user liked videos (Grade: B)
Returns a paginated list of videos liked by a TikTok user. Note: Users may have their likes set to private, in which case an empty list will be returned. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of videos to fetch (maximum: 35) Default: 35 | |
| cursor | No | Pagination cursor Default: 0 | |
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
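Since private likes and an empty like history both come back as an empty list, a caller cannot tell them apart. A small defensive sketch, with the payload shape assumed as in the earlier paging example:

```python
def liked_items(payload: dict) -> list:
    """Extract liked videos; an empty list may mean the likes are private."""
    items = payload.get("data", {}).get("orderedItems", [])
    if not items:
        print("Empty result: no liked videos, or likes are set to private.")
    return items
```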
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=false, and the description's note that private likes return an empty list adds context. However, it does not disclose potential rate limiting or data freshness. The description partially compensates for missing behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences, front-loaded with the core purpose. The later sentences add a useful caveat and feature hints without bloat.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the schema's comprehensive parameter descriptions and no output schema, the description covers the main behavioral note (private likes) and the payload reduction feature. However, it lacks information about pagination behavior beyond an empty list case and does not mention typical error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All parameters have descriptions in the schema (100% coverage), so the description does not need to add parameter details. It mentions three optional fields (projection_preset, data_fields, item_fields) but does not explain the projection_preset options beyond what the schema already lists.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a paginated list of liked videos for a TikTok user. While it distinguishes itself from sibling tools like tiktok_get_user_videos, it could more explicitly differentiate from other user-specific endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that private likes result in an empty list, which helps set expectations, but it does not provide explicit guidance on when to use this versus other tools, such as tiktok_get_user_videos, or mention prerequisites like authentication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user_live - Tiktok: Get user live stream (Grade: A)
Returns the current live stream for a user, including stream URLs and viewer count. Returns 404 if the user is not currently live. Supports projection_preset and data_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
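The 404-when-not-live behavior is worth handling explicitly. A hedged sketch, assuming the session setup from the earlier example and that the server surfaces the 404 as an MCP error result:

```python
async def current_live(session, username: str):
    """Return the live-stream content blocks, or None if the user is offline."""
    result = await session.call_tool("tiktok_get_user_live", {"username": username})
    if result.isError:  # assumption: the underlying 404 arrives as a tool error
        return None
    return result.content
```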
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses retrieval-only behavior (readOnlyHint=false is set, but openWorldHint=true fits an external lookup) and that it returns 404 if the user is not live, which is important for error handling. Annotations already indicate no destructive action, but the description adds the specific error condition.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Only three sentences, each adding distinct value: what it returns, special error case, and available parameter options. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple parameters, description is sufficient. Could mention that it requires username string, but that's obvious from required field. Missing mention of rate limits or auth, but acceptable for a read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description mentions projection_preset and data_fields for payload reduction but doesn't add meaning beyond schema (e.g., no examples of how to use them). Adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns current live stream for a user, including specific data (stream URLs, viewer count) and error condition (404 if not live). Distinguishes from siblings like tiktok_get_user or tiktok_get_user_videos by focusing specifically on live streaming.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage when you need live stream data for a specific user. While it doesn't explicitly contrast with siblings, the description and name make it clear this is for live streams only, not regular videos or user info.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user_reposts - Tiktok: Get user reposts (Grade: A)
Returns a paginated list of videos reposted by a TikTok user. Note: Users may have their reposts set to private, in which case an empty list will be returned. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of videos to fetch (maximum: 30) Default: 30 | |
| order | No | Sort order: newest (default), popular, or oldest Default: newest | |
| cursor | No | Pagination cursor Default: 0 | |
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
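Illustrative arguments for fetching a user's most popular reposts; as with likes, a private repost list comes back empty.

```python
# Hypothetical tiktok_get_user_reposts arguments (username illustrative).
args = {
    "username": "somecreator",
    "order": "popular",   # newest (default), popular, or oldest
    "count": 30,          # per-page maximum
}
```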
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=false (readOnlyHint is not explicitly set), so the description doesn't need to reaffirm safety. It adds value by noting that private reposts return empty lists, which is a key behavioral trait. No contradictions with the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Short and front-loaded: three sentences. The first sentence states the purpose, the second adds a caveat, the third lists optional field filtering. No fluff, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters (1 required), 100% schema coverage, and no output schema, the description adequately explains the main behavior and edge case (private reposts). It provides essential cues for pagination (cursor) and payload reduction, making it sufficient for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters, so baseline is 3. The description does not add additional meaning beyond the schema descriptions, e.g., it doesn't explain the default pagination behavior or format. So no extra credit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a paginated list of reposted videos for a TikTok user. The description distinguishes it from sibling tools like tiktok_get_user_videos (user's own videos) and tiktok_get_user_likes (liked videos) by specifying 'reposted'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a usage note about private reposts leading to empty results, which helps agents handle failures. It also mentions support for projection_preset and field filtering for payload reduction, but does not explicitly contrast with alternative tools for fetching user content.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user_stories - Tiktok: Get user stories (Grade: A)
Returns a paginated list of active stories for a TikTok user. Stories expire after 24 hours and include both images and videos with engagement data. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of stories to fetch (maximum: 35) Default: 4 | |
| cursor | No | Pagination cursor Default: 0 | |
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
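Note the asymmetry between the default count (4) and the maximum (35): an agent that wants every active story should raise count explicitly, as in this sketch with an illustrative username.

```python
# Hypothetical tiktok_get_user_stories arguments.
args = {
    "username": "somecreator",
    "count": 35,                     # maximum; the default returns only 4
    "projection_preset": "content",  # keep media/content fields in the payload
}
```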
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that stories expire after 24 hours and include images/videos with engagement data. Annotations show readOnlyHint=false, although the operation behaves as a read (stories are fetched, not modified). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding distinct value: purpose, then expiration and engagement details, then payload options. Efficient, though the word 'TikTok' could be considered redundant with the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 100% schema coverage, no output schema, and no nested objects, the description adequately covers purpose, constraints, and custom fields. Missing return structure but not critical when schema covers parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema descriptions already cover 100% of parameters, so baseline is 3. Description adds value by naming engagement data and payload reduction via presets/fields, clarifying purpose of projection presets beyond enum listing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a paginated list of active stories for a TikTok user. Distinguishes from siblings like tiktok_get_user_videos by specifying 'stories' and adding expiration detail.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context (fetch active stories) but does not explicitly exclude alternatives or provide when-not-to-use guidance. However, the expiration note and pagination info help the agent decide when to call.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_user_videos - Tiktok: Get user videos (Grade: A)
Returns a paginated list of videos from a user's profile. Supports sorting by newest, popular, or oldest. Maximum 35 videos per page. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of videos to fetch (maximum: 35) Default: 35 | |
| order | No | Sort order: newest (default), popular, or oldest Default: newest | |
| cursor | No | Pagination cursor Default: 0 | |
| username | Yes | TikTok username (with or without @ symbol) | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
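Illustrative arguments combining sorting with field layering; whether item_fields takes a list and which key names exist are assumptions, since only the parameter's purpose is documented here.

```python
# Hypothetical tiktok_get_user_videos arguments: all-time most-liked uploads.
args = {
    "username": "somecreator",      # illustrative
    "order": "popular",
    "count": 35,                    # per-page maximum
    "projection_preset": "minimal",
    "item_fields": ["stats"],       # assumption: list form and key name
}
```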
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is not read-only (readOnlyHint false), open-world (openWorldHint true), not idempotent, and not destructive. The description adds behavioral insights: maximum 35 videos per page, supports pagination via cursor, and payload reduction via projection_preset and fields. This adds value beyond annotations, but could disclose rate limits or API-specific constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four short sentences: purpose, sorting options, page maximum, and payload reduction. It is concise and front-loaded with the core purpose. One sentence could be omitted since the schema already covers the parameters, but it remains lean and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description should not explain return values in detail. It covers the key purpose and features (pagination, sorting, limiting, payload reduction). The tool has 7 parameters, 1 required, and moderate complexity. The description feels complete for an agent to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all 7 parameters have descriptions in the schema. The tool description adds no new parameter information beyond summarizing the purpose. It mentions three param names (projection_preset, data_fields, item_fields) but only generally. Baseline score is 3 due to high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states this tool returns a paginated list of videos from a user's profile, using the specific verb 'Returns' and resource 'videos from a user's profile'. It distinguishes itself from sibling tools like tiktok_get_user_followers or tiktok_get_user_likes by focusing on videos, and from tiktok_get_video by operating on a user's full profile rather than a single video.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to get videos from a user's profile. It mentions sorting options and pagination, implying usage for browsing or fetching a set of videos. However, it doesn't explicitly state when not to use it or name alternatives, though siblings like tiktok_get_user_likes or tiktok_get_video exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_video - Tiktok: Get video details (Grade: A)
Returns details for a single TikTok video including engagement counts, media files in multiple qualities, audio track, author info, and hashtags. Supports projection_preset and data_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| videoId | Yes | TikTok video ID | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
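Illustrative arguments layering one extra top-level key on the minimal preset; the video ID format and the data_fields value shape are assumptions.

```python
# Hypothetical tiktok_get_video arguments.
args = {
    "videoId": "7280000000000000000",  # illustrative video ID
    "projection_preset": "minimal",
    "data_fields": ["author"],         # assumption: list of top-level data keys
}
```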
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false (not read-only), destructiveHint=false (not destructive), and idempotentHint=false (not idempotent). Despite minimal annotation coverage, the description adds value by specifying the returned data categories and optimization features. It does not explicitly state side effects, which is acceptable given the non-destructive, retrieval-oriented nature of the tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is two sentences, efficiently covering purpose, return data, and customization options. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 3 parameters with 100% schema coverage, no output schema, and moderate complexity. The description mentions key data fields and optimization options, which is sufficient for an agent to understand the tool's use. Could be slightly more detailed about behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully described in the schema. The description adds mention of 'data_fields' and 'projection_preset' for payload reduction, but does not provide deeper semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns details for a single TikTok video, listing specific data categories (engagement counts, media files, audio, author, hashtags). This differentiates it from sibling tools like tiktok_get_audio or tiktok_get_user_videos.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions payload reduction options but does not provide explicit guidance on when to use this tool vs alternatives like tiktok_get_video_comments or tiktok_search_videos. Implied use is for a single video's complete details.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_video_comments - Tiktok: Get video comments (Grade: A)
Returns a paginated list of top-level comments on a video. Maximum 50 per page. Use the replies endpoint to fetch threaded replies. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of comments to fetch (maximum: 50) Default: 20 | |
| cursor | No | Pagination cursor Default: 0 | |
| videoId | Yes | TikTok video ID | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
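A hedged two-step sketch of threading, matching the description's pointer to the replies endpoint: page top-level comments, then follow up with the sibling replies tool. The session is the one from the earlier example; how a comment ID is named in the payload is not documented, so that step stays a comment.

```python
async def top_comments(session, video_id: str):
    """Fetch up to 50 top-level comments (the per-page maximum)."""
    result = await session.call_tool(
        "tiktok_get_video_comments",
        {"videoId": video_id, "count": 50},
    )
    # Assumed follow-up: extract a comment ID from the payload, then call
    # tiktok_get_comment_replies with it to fetch the threaded replies.
    return result.content
```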
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes the pagination limit (50 per page) and payload reduction features, going beyond the annotations, which only provide readOnlyHint=false and idempotentHint=false. No contradiction with the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences covering all essential points without redundancy or extraneous details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides enough information for an AI agent to understand usage and restrictions. Lacks output schema but pagination details are clear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all parameters. Description adds context about pagination limit and payload reduction but does not significantly enhance beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns top-level comments for a video, includes pagination and a link to the replies endpoint. Differentiates from sibling tools like tiktok_get_comment_replies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions using the replies endpoint for threaded replies, providing a clear usage context. No explicit when-not-to-use or alternatives beyond that.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_get_video_transcript - Tiktok: Get video transcript (Grade: A)
Returns the transcript for a video in a specific language. Supports both auto-generated (ASR) and machine-translated subtitles. Returns WebVTT format. Supports projection_preset and data_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| videoId | Yes | TikTok video ID | |
| language | Yes | BCP47 language code | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
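Illustrative arguments for an English transcript; language takes a BCP47 code, and the body comes back in WebVTT format.

```python
# Hypothetical tiktok_get_video_transcript arguments.
args = {
    "videoId": "7280000000000000000",  # illustrative
    "language": "en-US",               # BCP47 language code
}
```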
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that it returns WebVTT format and that subtitles can be auto-generated or machine-translated. Annotations indicate openWorldHint=true, consistent with a read-only lookup, and destructiveHint=false aligns with no mutation. The description adds valuable behavioral context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four short sentences, front-loading the core purpose and then adding key details. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 params, no output schema), the description is reasonably complete. It covers purpose, output format, subtitle types, and payload optimization. It could mention that videoId is required and language codes are BCP47, but the schema already includes that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the description adds limited extra meaning. It mentions WebVTT format and subtitle types, which are not in schema descriptions. It also explains that projection_preset and data_fields reduce payload, providing context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns the transcript for a video in a specific language, with details on subtitle types (ASR and machine-translated) and output format (WebVTT). It distinguishes from sibling tools like tiktok_get_video, which likely returns video metadata, and tiktok_search_videos, which searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions support for auto-generated and machine-translated subtitles and payload reduction via projection_preset and data_fields, but does not explicitly say when to use this tool over alternatives. However, the context is clear enough for an agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_search_content - Tiktok: Search content (Grade: A)
General search that returns mixed results including videos and user profiles. Supports sorting and publish-time filters. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of items to fetch (maximum: 100, actual results may vary) Default: 50 | |
| order | No | Sort order: relevance (default), most-liked, or date-posted Default: relevance | |
| query | Yes | Search query | |
| cursor | No | Pagination cursor Default: 0 | |
| published | No | Filter by publish time: all-time (default), yesterday, this-week, this-month, last-3-months, or last-6-months Default: all-time | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
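Illustrative arguments for a mixed search (videos plus profiles) restricted to the last month and sorted by likes; the query is hypothetical.

```python
# Hypothetical tiktok_search_content arguments.
args = {
    "query": "home espresso",
    "published": "this-month",  # publish-time filter
    "order": "most-liked",
    "count": 50,                # up to 100; actual results may vary
}
```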
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, destructiveHint=false, and openWorldHint=true, suggesting it makes external requests. The description adds that it returns 'mixed results' and supports payload reduction, which is useful. However, it does not disclose details like rate limits, authentication needs, or that results may vary (though the schema's count description hints at actual results varying). With annotations already present, the description provides moderate additional context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences, each adding distinct value: first states the tool's core purpose, second lists key features, third lists specific payload options. No wasted words; front-loaded with essential info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 8 parameters (1 required), no output schema, and annotations, the description adequately explains the tool's behavior and parameters. It mentions sorting, filters, and payload reduction. However, it could briefly note pagination or that results are across content types (videos+users). Minor gap for a mixed-search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers all 8 parameters with descriptions (100% schema_description_coverage). The description mentions support for 'projection_preset, data_fields, and item_fields for payload reduction,' which adds context beyond the schema's individual descriptions. However, since schema already covers each parameter, the description's added value is moderate; baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'General search that returns mixed results including videos and user profiles.' This specifies the verb (search) and resource (content, i.e., videos and user profiles). However, it does not explicitly distinguish from sibling tools like tiktok_search_videos or tiktok_search_users, though the 'mixed results' phrase implies it combines both, setting it apart.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions features like 'supports sorting and publish-time filters' and 'supports projection_preset... for payload reduction,' giving context on when to invoke optional parameters. However, it does not explicitly state when to use this tool over sibling search tools (e.g., tiktok_search_videos or tiktok_search_users), nor does it provide exclusions or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_search_users - Tiktok: Search users (Grade: A)
Searches TikTok for user profiles matching a query. Fixed page size of 10 (platform limit). Returns profile details including follower counts and verification status. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Page size (fixed at 10 by the platform) Default: 10 | |
| query | Yes | Search query | |
| cursor | No | Pagination cursor Default: 0 | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
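With the page size pinned at 10 by the platform, deeper results come only from advancing the cursor; the offset-style step shown below is an assumption, and the query is hypothetical.

```python
# Hypothetical tiktok_search_users arguments for the first two pages.
page_1 = {"query": "espresso", "cursor": 0}
page_2 = {"query": "espresso", "cursor": 10}  # assumption: cursor advances by page size
```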
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and idempotentHint=false, but destructiveHint=false and openWorldHint=true. The description explains the fixed page size (10) and payload reduction options, which adds behavioral context. However, it neither states that the tool is purely a read (despite the open-world hint) nor raises any mutation implications, which at least stays consistent with the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four short sentences: purpose, fixed page size, returned profile details, and payload reduction options. It is concise and front-loaded with the main action. No wasted words, though it could be more structured, with bullets for parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (6 params, no output schema), the description covers the main purpose, pagination constraint, return fields (follower counts, verification), and payload reduction. Missing details on pagination via cursor or how to use it, but the schema covers cursor. Completeness is good for a search tool with high schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter having a description. The description repeats that 'count' is fixed at 10 by the platform, which the schema also states ('Page size (fixed at 10 by the platform)'). It also summarizes projection_preset, data_fields, and item_fields for payload reduction, adding value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches TikTok for user profiles matching a query, distinguishing it from sibling tools like tiktok_search_videos (searches videos) and tiktok_search_content (generic search). The verb 'searches' and resource 'user profiles' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions a fixed page size of 10 as a platform limit, which implies pagination limits but does not explicitly guide when to use this tool versus alternatives like tiktok_get_user (which retrieves a specific user) or other search tools. No when-not-to-use or exclusion criteria given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_search_videos - Tiktok: Search videos (Grade: A)
Searches TikTok for videos matching a query. Supports filtering by publish time and sorting by relevance, likes, or date. Supports projection_preset, data_fields, and item_fields for payload reduction.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of videos to fetch (maximum: 100, actual results may vary) Default: 50 | |
| order | No | Sort order: relevance (default), most-liked, or date-posted Default: relevance | |
| query | Yes | Search query | |
| cursor | No | Pagination cursor Default: 0 | |
| published | No | Filter by publish time: all-time (default), yesterday, this-week, this-month, last-3-months, or last-6-months Default: all-time | |
| data_fields | No | Optional. Add top-level data keys on top of the selected projection_preset. | |
| item_fields | No | Optional. Add item keys from data.orderedItems[] on top of the selected projection_preset. | |
| projection_preset | No | Optional. Defaults to "minimal". Use one of: full, minimal, identity, engagement, or content. |
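Illustrative arguments contrasting this tool with tiktok_search_content: videos only, newest first, limited to this week; the query is hypothetical.

```python
# Hypothetical tiktok_search_videos arguments.
args = {
    "query": "marathon training",
    "published": "this-week",
    "order": "date-posted",
    "projection_preset": "content",
}
```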
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=false (meaning writes may occur) and destructiveHint=false (not destructive). The description adds publish-time filtering, sorting, and payload reduction features (projection_preset, data_fields, item_fields), which are behavioral traits beyond the annotations. However, it does not mention write behavior or other side effects. With the annotations doing partial work, the description adds moderate value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each adding distinct value: first states core purpose, second adds filtering/sorting options, third mentions payload reduction features. No unnecessary words. Slightly verbose for mentioning all three payload reduction options explicitly, but still concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of 8 parameters and no output schema, the description covers key features (filters, sorting, payload reduction) but lacks details about return structure, pagination behavior, or rate limits. The context signals show high schema coverage, so the description is adequate but not fully complete for a search tool with many options.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so every parameter already has a description in the schema. The tool description adds high-level semantics by mentioning filtering and sorting, but it does not add meaning beyond what the schema provides. A baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Searches' and the resource 'videos on TikTok'. It distinguishes from siblings like tiktok_search_content and tiktok_search_users by specifying it searches videos, and the title explicitly says 'Search videos'. The description also adds specific features like filtering and sorting, which sets clear expectations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description makes the video-search use case clear, but it does not explicitly state when to use this tool versus alternatives like tiktok_search_content (which may search broader content types). No exclusions or when-not-to-use guidance is provided; the description is adequate but lacks comparative context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
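To confirm the file is being served before Glama's automatic check runs, you can fetch it yourself. A minimal sketch in Python (stdlib only), assuming your server's domain is example.com; Glama's exact verification criteria beyond the file existing and the email matching are not documented here.

```python
# Fetch the well-known file and confirm it parses and names a maintainer email.
import json
from urllib.request import urlopen

with urlopen("https://example.com/.well-known/glama.json") as resp:
    doc = json.load(resp)

assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
emails = [m["email"] for m in doc["maintainers"]]
print("maintainer emails:", emails)  # must match your Glama account email
```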
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server successfully. This can happen for several reasons (a minimal self-check is sketched after this list):
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
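As referenced above, one way to self-diagnose is to attempt the MCP handshake against your own URL, sketched below with the MCP Python SDK: a connection error points at an outage or a wrong URL, while a rejected handshake typically points at missing or invalid credentials. The URL is hypothetical.

```python
# Try to initialize an MCP session; any failure maps onto the causes listed above.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # replace with your server's URL


async def health_check() -> None:
    try:
        async with streamablehttp_client(SERVER_URL) as (read, write, _):
            async with ClientSession(read, write) as session:
                info = await session.initialize()
                print("healthy:", info.serverInfo.name)
    except Exception as exc:  # outage, wrong URL, or rejected credentials
        print("unhealthy:", exc)


asyncio.run(health_check())
```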
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.