YouTube
Server Details
YouTube MCP — wraps the YouTube Data API v3 (BYO API key)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-youtube
- GitHub Stars: 0
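Because the server is exposed over the Streamable HTTP transport, any MCP client can reach it with plain JSON-RPC 2.0 messages. As a minimal sketch (this listing does not record the endpoint URL, so substitute your own), a `tools/list` request POSTed to the server's MCP endpoint looks like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
```

Per the Streamable HTTP transport, the POST should carry an `Accept: application/json, text/event-stream` header, and an `initialize` handshake precedes the first tool call.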
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across all 10 tools scored; lowest 3.1/5.
The tools are mostly distinct: Pipeworx utilities (ask_pipeworx, discover_tools, remember/recall/forget) serve different meta-purposes, and YouTube tools cover channels, videos, comments, and search. However, ask_pipeworx overlaps somewhat with discover_tools and the yt_* tools since it can answer questions about YouTube data, creating slight ambiguity.
The naming conventions are mixed: Pipeworx tools use verb-like names (ask_, discover_, remember, recall, forget) while YouTube tools use a consistent 'yt_' prefix followed by descriptive nouns (yt_channel_details, yt_channel_videos, yt_search, yt_video_comments, yt_video_details). The two groups follow different patterns, reducing overall consistency.
With 10 tools, the count is appropriate for a server combining a meta-toolkit (5 tools for memory and discovery) and a YouTube data API (5 tools). Each group is well-scoped, though the Pipeworx memory tools could be considered a separate concern.
The YouTube tools cover the essential operations: search, channel details, video details, video listing, and comments. Uploads, playlists, and subscription management are missing, but those are advanced operations. The memory tools cover create, read, and delete but lack an update operation. Overall, the set is mostly complete for common use cases.
Available Tools
10 tools
ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
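As a hedged illustration, a raw MCP `tools/call` request for this tool might look like the following (the question text is invented for the example):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What are the most viewed videos on the MKBHD channel this year?"
    }
  }
}
```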
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool 'picks the right tool' and 'fills the arguments,' implying automated action, which is beyond the minimal schema. Since no annotations are provided, the description carries the burden and does a decent job but does not detail potential side effects, rate limits, or data sources used.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (3 sentences) and front-loaded with the main purpose. Examples add clarity without being verbose. Slight redundancy in 'no need to browse tools or learn schemas' could be trimmed, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema, no nested objects), the description adequately covers what the agent needs: how to ask questions and what to expect. It does not explain return format, but that is acceptable without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage and only one parameter described in the schema as 'Your question or request in natural language,' the description adds value by explaining how to use it (plain English, no need to browse tools) and giving concrete examples, which enriches the parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that this tool answers plain English questions by selecting the best data source, filling arguments, and returning results. It distinguishes itself from sibling tools that are specific to YouTube or memory operations by focusing on generic query resolution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'no need to browse tools or learn schemas' and provides example questions, guiding the user on when to use this tool. However, it does not explicitly state when not to use it or mention alternatives, though the sibling tools list provides implicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
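A sketch of the same pattern as a `tools/call` request, with an illustrative query and a tightened result limit:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "fetch comments on a YouTube video",
      "limit": 5
    }
  }
}
```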
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reveals it searches by natural language description and returns tool names and descriptions. It also implies ranking by relevance. Since annotations are absent, the description carries full burden, but it lacks details on whether results are sorted, paginated, or whether the tool has any side effects. However, for a read-only search tool, these are minor gaps.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each carrying important information: purpose, output, and usage guidance. No wasted words. Front-loaded with the main action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has no output schema and is a simple search tool, the description is complete enough. It explains what it does, when to use it, and what it returns. No gaps for its intended use case.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage, so baseline is 3. The description does not add extra meaning beyond what the schema provides, but it does give an example for the 'query' parameter ('e.g., "analyze housing market trends"'), which is helpful but not required given the schema's description already provides a similar example.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches a tool catalog by natural language query and returns the most relevant tools. It uses specific verbs ('Search', 'Returns') and distinguishes itself as a discovery tool to be called first when many tools are available.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task', providing clear when-to-use guidance. This contrasts with sibling tools which are likely task-specific, implying the tool should be used for selection rather than execution.
forget (A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
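Invoked as a `tools/call`, a deletion might look like this (the key is illustrative; note the description gives no reversibility guarantees):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": { "key": "target_channel" }
  }
}
```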
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so description carries full burden. It states the action is destructive (delete) but does not disclose irreversibility, permissions, or effects on related data. A 2 is appropriate as it only conveys the basic destructive nature.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence, six words, zero waste. Every word is necessary and informative.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, no output schema, no nested objects), the description is adequate but minimal. It covers the basic purpose but lacks behavioral details like confirmation, reversibility, or error states that would aid an agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the single parameter is described in schema as 'Memory key to delete'. Description adds 'by key', confirming usage, but does not add new semantic information beyond the schema. Baseline 3 is correct.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Delete a stored memory by key', which is a specific verb (Delete) and resource (stored memory), and uniquely distinguishes from siblings like 'recall' (retrieve) and 'remember' (store).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage when needing to delete a memory, but provides no guidance on when not to use it or alternatives. For a destructive action, more context would be helpful.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
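Since `key` is optional, an empty `arguments` object lists every stored key; a sketch:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {}
  }
}
```

Passing `{ "key": "target_channel" }` instead would fetch that single memory.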
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes the tool's behavior: retrieving a single memory by key or listing all when key is omitted. No annotations provided, so the description carries the full burden. It could mention persistence (session vs cross-session) but does well overall.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. Front-loaded with action and resource, second sentence clarifies use context.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description hints at return value (the stored memory or list of keys) but doesn't detail format. For a simple retrieval tool, this is adequate.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description adds context: omitting key lists all. However, the description doesn't elaborate on the format of the key or what happens if key doesn't exist, but the schema already describes it as a string.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Retrieve' and the resource 'stored memory by key', and distinguishes between retrieving a specific key and listing all memories. The description is precise and leaves no ambiguity.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use the tool: 'to retrieve context you saved earlier' and mentions that omitting key lists all memories. However, it does not explicitly contrast with siblings like 'remember' or 'forget', though the use case is clear.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
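A sketch of storing a finding mid-session (key and value are invented for the example):

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_channel",
      "value": "UCBJycsmduvYEL83R_U4JriQ, researching retention patterns"
    }
  }
}
```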
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses memory duration (24 hours for anonymous, persistent for authenticated) which is key behavioral context beyond the basic store operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each earning its place: what it does, when to use, and behavioral nuance. No wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value store with no output schema and good schema coverage, description is complete. Could mention overwrite behavior or key naming conventions, but not essential.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description does not add parameter-level detail beyond schema examples, but the schema already provides good descriptions and examples.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb (store), resource (key-value pair), and context (session memory). It distinguishes from siblings like recall and forget by specifying the write operation.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description gives concrete examples of when to use (save findings, preferences, context) and notes persistence differences for authenticated vs anonymous users. However, no explicit exclusion or comparison to siblings.
yt_channel_details (A)
Get YouTube channel information and statistics including subscriber count, video count, view count, description, and custom URL.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | YouTube Data API v3 key from Google Cloud Console | |
| username | No | Channel username (alternative to channel_id) | |
| channel_id | No | Channel ID (starts with UC...) | |
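The schema implies you supply either `channel_id` or `username`; a sketch using a channel ID (the ID is illustrative, and the API key is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "yt_channel_details",
    "arguments": {
      "_apiKey": "YOUR_YOUTUBE_DATA_API_KEY",
      "channel_id": "UCBJycsmduvYEL83R_U4JriQ"
    }
  }
}
```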
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It states that the tool retrieves channel information and statistics, which is a read operation. However, it does not disclose any limitations, such as API rate limits, quota costs, or that only public channel data is returned.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded with the purpose. It lists specific data points, which is helpful. No extra words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is relatively simple (no output schema, 3 parameters). The description covers the main purpose and return data, but could mention that at least one of username or channel_id is needed for the query.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add meaning beyond what the schema already provides; it only lists data returned but not parameter-specific semantics.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves YouTube channel information and statistics, listing specific data points like subscriber count, video count, view count, description, and custom URL. It uses a specific verb ('Get') and resource ('YouTube channel information'), and distinguishes from sibling tools like 'yt_channel_videos' and 'yt_video_details'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching channel details but does not explicitly state when to use this tool versus alternatives like 'yt_channel_videos' or 'yt_search'. No exclusions or prerequisites are mentioned, though the required _apiKey parameter is a prerequisite.
yt_channel_videos (A)
List recent videos from a YouTube channel, ordered by date. Returns video ID, title, description, and publish date.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | YouTube Data API v3 key from Google Cloud Console | |
| channel_id | Yes | Channel ID to list videos from | |
| max_results | No | Number of videos to return (max 50) | 10 |
| published_after | No | Filter videos published after this ISO 8601 date | |
| published_before | No | Filter videos published before this ISO 8601 date | |
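A sketch combining the date filter with a larger page size (channel ID and date are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "yt_channel_videos",
    "arguments": {
      "_apiKey": "YOUR_YOUTUBE_DATA_API_KEY",
      "channel_id": "UCBJycsmduvYEL83R_U4JriQ",
      "published_after": "2024-01-01T00:00:00Z",
      "max_results": 25
    }
  }
}
```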
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the ordering (by date) and return fields, which is good, but it does not mention that results are paginated (max 50), that an API key is required (already in schema), or any rate limits or costs. The behavior is partially transparent but not fully.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence plus a list of returned fields. Every word adds value with no redundancy. It is concise and front-loaded with the core action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters, no output schema, and no annotations, the description is adequate but not complete. It explains the returned fields but does not cover pagination behavior, error cases, or the meaning of the returned description field (which may be truncated). It also doesn't clarify that the API key parameter is required (already in schema).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so each parameter already has a description. The tool description adds no extra meaning beyond what the schema provides (e.g., no mention that published_after/before are optional filters or that max_results defaults to 10). Baseline 3 is appropriate as schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists recent videos from a YouTube channel ordered by date and specifies the returned fields (ID, title, description, publish date). It distinguishes itself from siblings like yt_search (which likely searches across all of YouTube) and yt_channel_details (which gets channel metadata, not videos).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates when to use this tool (to get a list of recent videos from a specific channel) but does not explicitly mention when not to use it or name alternative tools like yt_search for broader queries. Given the context of sibling tools, the usage context is reasonably clear but lacks exclusions.
yt_search (A)
Search YouTube for videos, channels, or playlists. Returns snippet info including title, description, channel, thumbnails, and publish date.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Resource type to search: "video", "channel", or "playlist" | "video" |
| order | No | Sort order: "date", "rating", "relevance", or "viewCount" | "relevance" |
| query | Yes | Search term | |
| _apiKey | Yes | YouTube Data API v3 key from Google Cloud Console | |
| channel_id | No | Filter results to a specific channel ID | |
| max_results | No | Number of results to return (max 50) | 10 |
| published_after | No | Filter results published after this ISO 8601 date (e.g. "2024-01-01T00:00:00Z") | |
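A sketch of a filtered search that sorts videos by view count (the query text is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "yt_search",
    "arguments": {
      "_apiKey": "YOUR_YOUTUBE_DATA_API_KEY",
      "query": "model context protocol tutorial",
      "type": "video",
      "order": "viewCount",
      "max_results": 10
    }
  }
}
```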
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions that the tool returns 'snippet info' and lists what's included (title, description, channel, thumbnails, publish date), but does not disclose potential rate limits, authentication requirements beyond the API key, or any side effects. The description is adequate but not comprehensive.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence followed by a brief clarification of the return format. It is concise, front-loaded with the purpose, and every phrase adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 7 parameters (with 100% schema coverage) and no output schema. The description covers the return format, which partially compensates for the missing output schema. It is complete for a search tool but could mention pagination or the default number of results (already in schema).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are described in the schema. The description adds the context that results include 'snippet info' and lists fields, but does not add meaning beyond the schema for individual parameters. A score of 3 is appropriate as the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Search'), resource ('YouTube'), and specific types ('videos, channels, or playlists'), distinguishing it from sibling tools like yt_channel_details and yt_video_details which focus on specific entities.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for general search across different resource types but does not explicitly state when to use this tool over alternatives like yt_channel_videos or yt_video_details. The sibling tools' names suggest more specific use cases, but the description does not address exclusions.
yt_video_comments (B)
Get top-level comment threads on a YouTube video. Returns author, text, like count, and publish date.
| Name | Required | Description | Default |
|---|---|---|---|
| order | No | Sort order: "time" or "relevance" | "relevance" |
| _apiKey | Yes | YouTube Data API v3 key from Google Cloud Console | |
| video_id | Yes | Video ID to fetch comments for | |
| max_results | No | Number of comment threads to return (max 100) | 20 |
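A sketch that pulls the 50 most relevant top-level threads for a video (the video ID is illustrative; replies are not included, per the description):

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "yt_video_comments",
    "arguments": {
      "_apiKey": "YOUR_YOUTUBE_DATA_API_KEY",
      "video_id": "dQw4w9WgXcQ",
      "order": "relevance",
      "max_results": 50
    }
  }
}
```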
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must convey behavioral traits. It correctly indicates a read operation (returns data) and specifies returned fields. However, it does not disclose that it only fetches top-level threads (not replies), rate limits, or authentication requirements beyond the API key.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise and front-loaded with the main action. It efficiently communicates the purpose and output fields with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and empty annotations, the description should compensate by explaining the return format, pagination, and potential errors. It only lists a few fields, missing details like the structure of the response, pagination through max_results, and that only top-level threads are returned.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes all parameters. The description adds no extra meaning to parameters beyond what the schema provides. The schema descriptions are clear, but the tool description does not enhance them.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves top-level comment threads from a YouTube video, specifying the returned fields (author, text, like count, publish date). This distinguishes it from sibling tools like yt_search and yt_video_details, though it does not explicitly contrast with other comment-related tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., video must exist), limitations (e.g., only top-level threads, not replies), or context for use. The description simply states what it does without any usage advice.
yt_video_details (A)
Get detailed information about one or more YouTube videos including title, description, channel, duration, view/like/comment counts, and tags.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | YouTube Data API v3 key from Google Cloud Console | |
| video_id | Yes | Video ID or comma-separated list of video IDs (max 50) | |
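Because `video_id` accepts a comma-separated list, one call can batch up to 50 videos; a sketch with two illustrative IDs:

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "yt_video_details",
    "arguments": {
      "_apiKey": "YOUR_YOUTUBE_DATA_API_KEY",
      "video_id": "dQw4w9WgXcQ,9bZkp7q19f0"
    }
  }
}
```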
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must disclose behavior. It mentions retrieval of details (non-destructive), but does not address rate limits, API key requirements, or response size constraints. The description is adequate but lacks depth for a mutation-free tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the tool's purpose and output data. No unnecessary words, but could be slightly more structured (e.g., bullet points) for readability.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does a fair job listing returned fields. However, it omits details like error handling, quota usage, or pagination for multiple IDs. The tool is simple but could be more complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both parameters described in the schema. The description does not add meaning beyond the schema, but baseline is 3 due to full coverage. The mention of 'comma-separated list' and 'max 50' in the schema is sufficient.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'detailed information about one or more YouTube videos' and lists specific data points (title, description, channel, duration, view/like/comment counts, tags). This verb+resource combination is precise and distinguishes it from siblings like yt_search or yt_channel_videos.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching video metadata but provides no explicit guidance on when to use this vs. alternatives like yt_search or yt_channel_videos. No exclusions or context for selecting this tool over siblings are given.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.