
MCP-Confirm

by mako10k

collect_rating

Collect user satisfaction ratings for AI responses to measure help quality and improve service.

Instructions

Collect user satisfaction rating for AI's response or help quality

Input Schema

Name | Required | Description | Default
subject | Yes | What to rate (e.g., 'this response', 'my help with your task') | —
description | No | Additional context for the rating request | —
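A call that satisfies this schema might look like the following (the values are purely illustrative):

```typescript
// Example arguments for a collect_rating call; values are illustrative.
const args = {
  subject: "this response", // required: what the user is rating
  description: "Rate the refactoring help you just received", // optional context
};
```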

Implementation Reference

  • The handler function that executes the 'collect_rating' tool. It constructs an elicitation request for a numeric rating (1-10) and optional comment based on the provided subject and description, sends it via sendElicitationRequest, and returns a formatted text response with the user's input or action.
    private async handleCollectRating(args: Record<string, unknown>) {
      const subject =
        typeof args.subject === "string" ? args.subject : "this item";
      const description =
        typeof args.description === "string" ? args.description : undefined;
    
      const elicitationParams: ElicitationParams = {
        message: `Please rate ${subject}`,
        requestedSchema: {
          type: "object",
          properties: {
            rating: {
              type: "number",
              title: "Rating (1-10 scale)",
              description:
                description ||
                `Rate ${subject} from 1 to 10 (10 is the highest/best)`,
              minimum: 1,
              maximum: 10,
            },
            comment: {
              type: "string",
              title: "Comment",
              description: "Optional comment about your rating",
            },
          },
          required: ["rating"],
        },
        // Use default timeout instead of hardcoded short timeout
        timeoutMs: this.config.defaultTimeoutMs,
      };
    
      try {
        const response = await this.sendElicitationRequest(elicitationParams);
    
        if (response.action === "accept" && response.content) {
          const rating = response.content.rating as number;
          const comment = response.content.comment as string | undefined;
          return {
            content: [
              {
                type: "text",
                text: `User rating for ${subject}: ${rating}/10${comment ? `\nComment: ${comment}` : ""}`,
              },
            ],
          };
        } else {
          return {
            content: [
              {
                type: "text",
                text: `User ${response.action}ed the rating request.`,
              },
            ],
          };
        }
      } catch (error) {
        return this.createErrorResponse(
          `Rating collection failed: ${error instanceof Error ? error.message : String(error)}`
        );
      }
    }
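The defensive coercion at the top of the handler can be exercised in isolation. The sketch below re-implements just that fallback logic (the function name is illustrative, not an actual export of the server) to show how missing or non-string arguments degrade to safe defaults rather than failing:

```typescript
// Minimal sketch of the handler's argument coercion (illustrative name,
// not the server's actual export).
function coerceRatingArgs(args: Record<string, unknown>): {
  subject: string;
  description: string | undefined;
} {
  // Fall back to "this item" when subject is absent or not a string,
  // mirroring the handler above; drop non-string descriptions entirely.
  const subject =
    typeof args.subject === "string" ? args.subject : "this item";
  const description =
    typeof args.description === "string" ? args.description : undefined;
  return { subject, description };
}
```

This keeps the elicitation prompt well-formed even when the caller passes malformed arguments, at the cost of silently masking a caller bug.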
  • Defines the Tool metadata including name 'collect_rating', description, and inputSchema specifying required 'subject' string and optional 'description' string.
    private createCollectRatingTool(): Tool {
      return {
        name: "collect_rating",
        description:
          "Collect user satisfaction rating for AI's response or help quality",
        inputSchema: {
          type: "object",
          properties: {
            subject: {
              type: "string",
              description:
                "What to rate (e.g., 'this response', 'my help with your task')",
            },
            description: {
              type: "string",
              description: "Additional context for the rating request",
            },
          },
          required: ["subject"],
        },
      };
    }
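Since the runtime may not enforce the inputSchema before the handler runs, a pre-dispatch check can mirror it. The helper below is a hypothetical sketch, not part of the server's code; it returns a list of violations against the schema's `required: ["subject"]` contract:

```typescript
// Hypothetical pre-dispatch check mirroring the inputSchema above:
// 'subject' is a required string; 'description' is an optional string.
function validateCollectRatingArgs(args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof args.subject !== "string") {
    errors.push("subject: required string");
  }
  if (args.description !== undefined && typeof args.description !== "string") {
    errors.push("description: must be a string when provided");
  }
  return errors;
}
```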
  • src/index.ts:231-242 (registration)
    Registers the 'collect_rating' tool in the list of available tools by calling createCollectRatingTool() and including it in the array returned for the listTools request handler.
    private getToolDefinitions(): Tool[] {
      return [
        this.createAskYesNoTool(),
        this.createConfirmActionTool(),
        this.createClarifyIntentTool(),
        this.createVerifyUnderstandingTool(),
        this.createCollectRatingTool(),
        this.createElicitCustomTool(),
        this.createSearchLogsTool(),
        this.createAnalyzeLogsTool(),
      ];
    }
  • src/index.ts:516-537 (dispatch)
    The tool execution dispatcher switch statement that routes calls to 'collect_rating' to the handleCollectRating handler function.
    private async executeToolCall(name: string, args: Record<string, unknown>) {
      switch (name) {
        case "ask_yes_no":
          return await this.handleAskYesNo(args);
        case "confirm_action":
          return await this.handleConfirmAction(args);
        case "clarify_intent":
          return await this.handleClarifyIntent(args);
        case "verify_understanding":
          return await this.handleVerifyUnderstanding(args);
        case "collect_rating":
          return await this.handleCollectRating(args);
        case "elicit_custom":
          return await this.handleElicitCustom(args);
        case "search_logs":
          return await this.handleSearchLogs(args);
        case "analyze_logs":
          return await this.handleAnalyzeLogs(args);
        default:
          throw new Error(`Unknown tool: ${name}`);
      }
    }
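The switch above amounts to a name-to-handler lookup with a throwing default. A minimal sketch of that contract, with handler bodies stubbed and names illustrative:

```typescript
// Sketch of the dispatcher's routing contract (handlers stubbed).
type Handler = (args: Record<string, unknown>) => Promise<string>;

const handlers: Record<string, Handler> = {
  collect_rating: async () => "rating collected",
  // ...the other tools register the same way
};

async function executeToolCall(
  name: string,
  args: Record<string, unknown>
): Promise<string> {
  const handler = handlers[name];
  if (!handler) {
    // Matches the switch statement's default branch above.
    throw new Error(`Unknown tool: ${name}`);
  }
  return handler(args);
}
```

A lookup table like this scales more gracefully than a switch as tools are added, though the switch makes exhaustiveness easier to audit at a glance.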
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool collects ratings but doesn't explain how (e.g., scale, format), whether it's interactive or passive, what happens to the data, or any limitations (e.g., rate limits). This leaves key behavioral traits unspecified for a tool that likely involves user interaction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and appropriately sized, making it easy to parse quickly while conveying the essential action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (collecting user feedback, which may involve interaction and data handling), lack of annotations, and no output schema, the description is incomplete. It doesn't cover behavioral aspects like response format, error handling, or user experience implications, leaving gaps that could hinder effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting both parameters. The description doesn't add any meaning beyond the schema, such as examples of typical 'subject' values or how 'description' enhances the rating context. With high schema coverage, the baseline score of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('collect') and resource ('user satisfaction rating'), specifying it's for AI's response or help quality. However, it doesn't explicitly distinguish this from sibling tools like 'ask_yes_no' or 'elicit_custom', which might also involve user feedback collection, leaving some ambiguity about its unique role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing (e.g., after an interaction), or differentiate from siblings like 'ask_yes_no' for binary feedback or 'elicit_custom' for open-ended input, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
