
listCollections

List local transcript collections, with the active search focus and indexed video counts, to manage YouTube intelligence data.

Instructions

List local transcript collections, active search focus, and indexed video/chunk counts.

Input Schema

Name               Required
includeVideoList   No
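For reference, a call to this tool might look like the following. This is only a sketch of the MCP `tools/call` request shape; the single argument, includeVideoList, is optional per the schema.

```typescript
// Sketch of an MCP tools/call request for listCollections.
// includeVideoList is the schema's only argument and is optional.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "listCollections",
    arguments: { includeVideoList: true }, // omit to get counts only
  },
};
console.log(JSON.stringify(request.params.arguments));
// → {"includeVideoList":true}
```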

Implementation Reference

  • The tool handler in the MCP server's dispatch logic for "listCollections", which delegates to the YouTubeService:
    case "listCollections":
      return service.listCollections({
        includeVideoList: optionalBoolean(args, "includeVideoList"),
      });
  • The actual database implementation of the listCollections tool, fetching collections from the SQLite database:
    listCollections(includeVideoList = false): ListCollectionsOutput {
      const rows = this.db.prepare(`
        SELECT
          c.collection_id,
          c.label,
          c.source_type,
          c.source_ref,
          c.source_title,
          c.source_channel_title,
          c.created_at,
          c.updated_at,
          (SELECT algorithm FROM collection_models m WHERE m.collection_id = c.collection_id) AS algorithm,
          COALESCE((SELECT COUNT(*) FROM collection_videos v WHERE v.collection_id = c.collection_id), 0) AS video_count,
          COALESCE((SELECT COUNT(*) FROM transcript_chunks ch WHERE ch.collection_id = c.collection_id), 0) AS total_chunks
        FROM collections c
        ORDER BY c.updated_at DESC, c.collection_id ASC
      `).all() as Array<{
        collection_id: string;
        label: string | null;
  • The registration definition of the "listCollections" tool in the MCP server tool list:
      name: "listCollections",
      description: "List local transcript collections, active search focus, and indexed video/chunk counts.",
      inputSchema: {
        type: "object",
        properties: {
          includeVideoList: { type: "boolean" },
        },
        additionalProperties: false,
      },
    },
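The dispatch snippet above calls an optionalBoolean helper that is not shown in this excerpt. A minimal sketch of what such a helper might look like follows; the name comes from the snippet, but the behavior here is an assumption, not the server's actual code.

```typescript
// Hypothetical sketch of the optionalBoolean helper used in the dispatch
// snippet: returns the argument if it is a boolean, undefined if absent,
// and throws on any other type.
function optionalBoolean(
  args: Record<string, unknown>,
  key: string,
): boolean | undefined {
  const value = args[key];
  if (value === undefined) return undefined;
  if (typeof value === "boolean") return value;
  throw new Error(`Argument "${key}" must be a boolean`);
}

console.log(optionalBoolean({ includeVideoList: true }, "includeVideoList"));
// → true
console.log(optionalBoolean({}, "includeVideoList"));
// → undefined
```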
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what information is listed but says nothing about permissions, rate limits, response format, or whether the operation is read-only or has side effects, which leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
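One way to close this gap is to declare the MCP tool-annotation hints on the registration. The field names below (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) come from the MCP specification; the values shown are a sketch reflecting this tool's read-only, local-database behavior, not the server's actual code.

```typescript
// Sketch: the existing registration extended with MCP behavioral
// annotation hints from the MCP specification.
const tool = {
  name: "listCollections",
  description:
    "List local transcript collections, active search focus, and indexed video/chunk counts.",
  inputSchema: {
    type: "object",
    properties: {
      includeVideoList: { type: "boolean" },
    },
    additionalProperties: false,
  },
  annotations: {
    readOnlyHint: true,     // queries SQLite only; no writes
    destructiveHint: false, // nothing is deleted or modified
    idempotentHint: true,   // repeated calls return the same data
    openWorldHint: false,   // operates on local data, not the open network
  },
};
console.log(tool.annotations.readOnlyHint);
// → true
```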

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the main purpose without unnecessary words. Every part of the sentence contributes directly to understanding the tool's output, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional parameter, no output schema, no annotations), the description covers the basic purpose but lacks details on behavioral aspects like response format or operational constraints. It is minimally viable but incomplete for full contextual understanding, especially without annotations to fill gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema's single parameter has no description, but the tool requires no parameters at all. The description does not mention 'includeVideoList', yet with nothing required and a simple schema, it adequately conveys the tool's core function despite the missing parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
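One remedy is to document the parameter at the schema level rather than in the description. The sketch below adds a description and explicit default to includeVideoList; the wording is illustrative, not the server's actual schema.

```typescript
// Sketch: the same inputSchema with a description and explicit default
// on includeVideoList. Wording is illustrative, not from the server.
const inputSchema = {
  type: "object",
  properties: {
    includeVideoList: {
      type: "boolean",
      description:
        "When true, include each collection's indexed video list in the " +
        "response; when false (the default), return counts only.",
      default: false,
    },
  },
  additionalProperties: false,
};
console.log(inputSchema.properties.includeVideoList.default);
// → false
```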

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: listing local transcript collections, active search focus, and indexed video/chunk counts. It uses specific verbs ('list') and resources ('collections', 'focus', 'counts'), making the function unambiguous. However, it does not explicitly differentiate from sibling tools like 'listChannelCatalog' or 'listCommentCollections', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or compare it to sibling tools such as 'listChannelCatalog' or 'listCommentCollections', leaving the agent to infer usage context without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
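A revised description could embed that guidance directly. The sibling tool names below are the ones cited in this review; the phrasing is only a sketch of what such a description might say.

```typescript
// Sketch of a description with explicit usage guidance. Sibling tool
// names are taken from the review above; phrasing is illustrative.
const description =
  "List local transcript collections, active search focus, and indexed " +
  "video/chunk counts. Read-only; queries the local SQLite index. " +
  "Use listChannelCatalog for channel catalogs and listCommentCollections " +
  "for comment data.";
console.log(description.includes("listChannelCatalog"));
// → true
```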

