
searchComments

Search YouTube comment collections to find relevant comments by query, with results ranked by relevance and showing author, likes, and score.

Instructions

Search imported comment collections with ranked results. Returns matching comments with author, like count, and relevance score. Uses active comment collection by default.

Input Schema

Name | Required | Description | Default
query | Yes | Search query | —
collectionId | No | Specific collection to search | —
maxResults | No | Maximum number of results (clamped to 1–50) | 10
minScore | No | Minimum relevance score (clamped to 0–1) | 0.15
videoIdFilter | No | Video IDs to restrict the search to (1–100 items) | —
useActiveCollection | No | Search the active collection when no collectionId is given | —
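Assuming standard MCP tools/call semantics, an invocation of searchComments might carry arguments like the following (field names come from the schema above; the values themselves are illustrative, not from the vidlens-mcp docs):

```typescript
// Illustrative tools/call payload for searchComments. Values are examples;
// only `query` is required by the schema.
const callPayload = {
  name: "searchComments",
  arguments: {
    query: "battery life complaints",  // required
    maxResults: 20,                    // clamped to 1–50 server-side
    minScore: 0.3,                     // clamped to 0–1 server-side
    videoIdFilter: ["dQw4w9WgXcQ"],    // restrict to specific videos
    useActiveCollection: true,         // the default behavior anyway
  },
};

console.log(JSON.stringify(callPayload, null, 2));
```

Omitting the optional fields entirely is equivalent to relying on the server-side defaults shown in the implementation below.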

Implementation Reference

  • The handler for the "searchComments" tool in the MCP server, which delegates the call to the service layer.
    case "searchComments":
      return service.searchComments({
        query: readString(args, "query"),
        collectionId: optionalString(args, "collectionId"),
        maxResults: optionalNumber(args, "maxResults"),
        minScore: optionalNumber(args, "minScore"),
        videoIdFilter: optionalStringArray(args, "videoIdFilter"),
        useActiveCollection: optionalBoolean(args, "useActiveCollection"),
      });
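The `readString` / `optional*` helpers used by the handler are not shown on this page. A minimal sketch of what they might look like, assuming `args` is a plain `Record<string, unknown>` (hypothetical implementations, not the actual vidlens-mcp source):

```typescript
// Hypothetical argument-reader helpers: each narrows an unknown value
// from the raw args object, returning undefined (or throwing) on mismatch.
type Args = Record<string, unknown>;

function readString(args: Args, key: string): string {
  const v = args[key];
  if (typeof v !== "string") throw new Error(`"${key}" must be a string`);
  return v;
}

function optionalString(args: Args, key: string): string | undefined {
  return typeof args[key] === "string" ? (args[key] as string) : undefined;
}

function optionalNumber(args: Args, key: string): number | undefined {
  return typeof args[key] === "number" ? (args[key] as number) : undefined;
}

function optionalBoolean(args: Args, key: string): boolean | undefined {
  return typeof args[key] === "boolean" ? (args[key] as boolean) : undefined;
}

function optionalStringArray(args: Args, key: string): string[] | undefined {
  const v = args[key];
  return Array.isArray(v) && v.every((x) => typeof x === "string")
    ? (v as string[])
    : undefined;
}
```

Only the required `query` reader throws; the optional readers silently return `undefined`, leaving defaulting to the service layer.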
  • The core implementation of the searchComments tool logic within the comment-knowledge-base service.
    async search(input: SearchCommentsInput): Promise<SearchCommentsOutput> {
      const startedAt = Date.now();
      const maxResults = Math.max(1, Math.min(input.maxResults ?? 10, 50));
      const minScore = Math.max(0, Math.min(input.minScore ?? 0.15, 1));
      const scope = this.resolveCollectionScope(input);
      const targetCollections = scope.searchedCollectionIds;
      const videoFilter = input.videoIdFilter
        ? new Set(input.videoIdFilter)
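Note how the `?? 10` / `?? 0.15` defaults combine with `Math.max`/`Math.min` so that out-of-range inputs are clamped rather than rejected. Extracted as standalone functions for illustration:

```typescript
// The clamping pattern from the excerpt above: undefined falls back to
// the default, then the value is clamped into the allowed range.
function clampMaxResults(maxResults?: number): number {
  return Math.max(1, Math.min(maxResults ?? 10, 50));
}

function clampMinScore(minScore?: number): number {
  return Math.max(0, Math.min(minScore ?? 0.15, 1));
}

console.log(clampMaxResults(undefined)); // 10 (default)
console.log(clampMaxResults(200));       // 50 (clamped to the maximum)
console.log(clampMinScore(-0.5));        // 0  (clamped to the minimum)
```

This is why a caller passing `maxResults: 200` gets 50 results back with no error, which is worth knowing when interpreting result counts.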
  • Registration of the "searchComments" tool definition in the MCP server.
      name: "searchComments",
      description: "Search imported comment collections with ranked results. Returns matching comments with author, like count, and relevance score. Uses active comment collection by default.",
      inputSchema: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search query" },
          collectionId: { type: "string", description: "Specific collection to search" },
          maxResults: { type: "number", minimum: 1, maximum: 50 },
          minScore: { type: "number", minimum: 0, maximum: 1 },
          videoIdFilter: { type: "array", items: { type: "string" }, minItems: 1, maxItems: 100 },
          useActiveCollection: { type: "boolean" },
        },
        required: ["query"],
        additionalProperties: false,
      },
    },
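The `required: ["query"]` and `additionalProperties: false` constraints mean a call missing `query`, or carrying an unknown key, should be rejected before the handler runs. A minimal stand-in check (not the server's real validator, which would be a full JSON Schema implementation) illustrates the rule:

```typescript
// Minimal stand-in for the schema's required/additionalProperties checks.
const allowedKeys = new Set([
  "query", "collectionId", "maxResults",
  "minScore", "videoIdFilter", "useActiveCollection",
]);

function passesSchemaBasics(args: Record<string, unknown>): boolean {
  if (typeof args["query"] !== "string") return false;        // required field
  return Object.keys(args).every((k) => allowedKeys.has(k));  // no extras
}

console.log(passesSchemaBasics({ query: "shipping issues" })); // true
console.log(passesSchemaBasics({ maxResults: 5 }));            // false: missing query
console.log(passesSchemaBasics({ query: "x", foo: 1 }));       // false: unknown key
```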
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions ranked results and default collection usage, but lacks critical details: whether this is a read-only operation, potential rate limits, authentication needs, or what happens if no collections exist. For a search tool with 6 parameters, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured: two sentences that efficiently convey purpose, output details, and default behavior. Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (6 parameters, search functionality) and lack of both annotations and output schema, the description is incomplete. It covers basic purpose and default behavior but misses important contextual details: expected return format beyond listed fields, error conditions, performance characteristics, and how results are ranked. The absence of output schema increases the need for more completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 33% (2 of 6 parameters have descriptions), so the description must compensate. It adds some value by explaining the default behavior ('uses active comment collection by default') which relates to 'useActiveCollection' and 'collectionId' parameters. However, it doesn't clarify the meaning or usage of other parameters like 'minScore' or 'videoIdFilter,' leaving them largely undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
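One way to close the gap the review identifies would be to add descriptions to the undocumented properties. A hypothetical revision of the `properties` block, with defaults taken from the clamping logic shown earlier rather than from the published schema:

```typescript
// Hypothetical revised `properties` block; the added descriptions are
// suggestions, not the published vidlens-mcp schema.
const revisedProperties = {
  query: { type: "string", description: "Search query" },
  collectionId: { type: "string", description: "Specific collection to search" },
  maxResults: {
    type: "number", minimum: 1, maximum: 50,
    description: "Maximum comments to return (default 10; out-of-range values are clamped)",
  },
  minScore: {
    type: "number", minimum: 0, maximum: 1,
    description: "Minimum relevance score for a comment to be included (default 0.15)",
  },
  videoIdFilter: {
    type: "array", items: { type: "string" }, minItems: 1, maxItems: 100,
    description: "Restrict results to comments from these video IDs",
  },
  useActiveCollection: {
    type: "boolean",
    description: "Search the active collection when collectionId is omitted",
  },
};

console.log(Object.keys(revisedProperties).length); // 6
```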

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching imported comment collections with ranked results. It specifies the verb ('search'), resource ('imported comment collections'), and key output details (author, like count, relevance score). However, it doesn't explicitly differentiate from sibling tools like 'searchTranscripts' or 'searchVisualContent' beyond mentioning 'comment collections'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context by mentioning 'uses active comment collection by default,' which implies when to use this vs. specifying a collectionId. However, it doesn't offer explicit guidance on when to choose this tool over alternatives like 'readComments' or 'listCommentCollections,' nor does it mention prerequisites (e.g., needing imported collections first).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
