
MCP Elicitations Demo Server

by soriat

startElicitation

Collect user preferences dynamically by prompting for favorite color, number, and pets using the MCP Elicitations Demo Server’s elicitation feature.

Instructions

Demonstrates the Elicitation feature by asking the user to provide information about their favorite color, number, and pets.

Input Schema

No arguments
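Because the tool takes no arguments, a `tools/call` request for it carries an empty `arguments` object. A minimal sketch of such a request payload, following the MCP JSON-RPC request shape (the `id` value is illustrative):

```typescript
// Hypothetical tools/call request payload for startElicitation,
// following the MCP JSON-RPC request shape.
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "startElicitation",
    arguments: {}, // the tool takes no arguments
  },
};
```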

Implementation Reference

  • The core handler function for the startElicitation tool. It triggers an elicitation request for user preferences (color, number, pets), handles the response based on user action (accept/decline/cancel), formats content accordingly, and includes raw result for debugging.
    handler: async (args: any, request: any, server: Server) => {
      ElicitationSchema.parse(args);
    
      const elicitationResult = await requestElicitation(
        'What are your favorite things?',
        {
          type: 'object',
          properties: {
            color: { type: 'string', description: 'Favorite color' },
            number: { type: 'integer', description: 'Favorite number', minimum: 1, maximum: 100 },
            pets: {
              type: 'string',
              enum: ['cats', 'dogs', 'birds', 'fish', 'reptiles'],
              description: 'Favorite pets'
            },
          }
        },
        server
      );
    
      // Handle different response actions
      const content: any[] = [];
    
      if (elicitationResult.action === 'accept' && elicitationResult.content) {
        content.push({
          type: "text" as const,
          text: `✅ User provided their favorite things!`,
        });
    
        // Only access elicitationResult.content when action is accept
        const { color, number, pets } = elicitationResult.content;
        content.push({
          type: "text" as const,
          text: `Their favorites are:\n- Color: ${color || 'not specified'}\n- Number: ${number || 'not specified'}\n- Pets: ${pets || 'not specified'}`,
        });
      } else if (elicitationResult.action === 'decline') {
        content.push({
          type: "text" as const,
          text: `❌ User declined to provide their favorite things.`,
        });
      } else if (elicitationResult.action === 'cancel') {
        content.push({
          type: "text" as const,
          text: `⚠️ User cancelled the elicitation dialog.`,
        });
      }
    
      // Include raw result for debugging
      content.push({
        type: "text" as const,
        text: `\nRaw result: ${JSON.stringify(elicitationResult, null, 2)}`,
      });
    
      return { content };
    },
  • Schema definition using Zod (empty object since no input arguments required) and its conversion to JSON schema for the tool's input validation.
    import { z } from 'zod';
    import { zodToJsonSchema } from 'zod-to-json-schema';
    
    const ElicitationSchema = z.object({});
    
    export const elicitationTool = {
      name: "startElicitation",
      description: "Demonstrates the Elicitation feature by asking the user to provide information about their favorite color, number, and pets.",
      inputSchema: zodToJsonSchema(ElicitationSchema),
      // handler shown in the excerpt above
    };
  • The elicitationTool is registered by inclusion in the allTools array, which is used to provide tool metadata via getTools() and dispatch handlers via getToolHandler().
    elicitationTool,
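The registry code itself is not shown in the excerpt. A minimal sketch of how an array-based registry with `getTools()` and `getToolHandler()` might look, using the names from the bullet above (the stub `elicitationTool` here stands in for the real export):

```typescript
// Sketch of an array-based tool registry; `elicitationTool` is a stub
// standing in for the real export shown above.
const elicitationTool = {
  name: "startElicitation",
  description: "Demonstrates the Elicitation feature.",
  inputSchema: { type: "object", properties: {} },
  handler: async () => ({ content: [] }),
};

const allTools = [elicitationTool];

// Expose tool metadata, e.g. for a tools/list response.
const getTools = () =>
  allTools.map(({ name, description, inputSchema }) => ({ name, description, inputSchema }));

// Look up the handler for a tools/call dispatch by tool name.
const getToolHandler = (name: string) =>
  allTools.find((tool) => tool.name === name)?.handler;
```

Keeping metadata and handlers in one array avoids the two drifting apart: a tool listed by `getTools()` is guaranteed to be dispatchable by `getToolHandler()`.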
  • Supporting utility function that sends an 'elicitation/create' request to the server to initiate user elicitation, used within the tool handler.
    import { z } from 'zod';
    import { Server } from '@modelcontextprotocol/sdk/server/index.js';
    
    export const requestElicitation = async (
      message: string,
      requestedSchema: any,
      server: Server
    ) => {
      const request = {
        method: 'elicitation/create',
        params: {
          message,
          requestedSchema
        }
      };
    
      return await server.request(request, z.any());
    };
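The handler above branches on `elicitationResult.action`. Per the MCP elicitation flow, clients answer with one of three actions, and `content` is only present on accept. A hedged TypeScript model of that response shape, with a `summarize` helper (hypothetical, mirroring the handler's branching):

```typescript
// Possible result shapes for an elicitation/create request:
// content is only present when the user accepts.
type ElicitationResult =
  | { action: "accept"; content?: Record<string, unknown> }
  | { action: "decline" }
  | { action: "cancel" };

// Hypothetical helper mirroring the branching in the handler above.
const summarize = (result: ElicitationResult): string => {
  switch (result.action) {
    case "accept":
      return `accepted: ${JSON.stringify(result.content ?? {})}`;
    case "decline":
      return "declined";
    case "cancel":
      return "cancelled";
  }
};
```

Modeling the result as a discriminated union lets the type checker enforce what the handler's comment asks for by convention: `content` is only accessible after narrowing on `action === 'accept'`.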
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool asks for user input (color, number, pets), implying an interactive or input-gathering behavior, but doesn't describe how this elicitation works (e.g., prompts, format, response handling), whether it's read-only or mutative, or any side effects. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
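For context, MCP tool definitions can carry behavioral annotations that this tool omits. A sketch of what declaring them for startElicitation might look like (the hint values are illustrative assumptions, not taken from the server):

```typescript
// Illustrative MCP-style tool annotations for startElicitation.
// These hints are advisory metadata, not enforced behavior;
// the values below are assumptions, not from the actual server.
const annotations = {
  title: "Start Elicitation",
  readOnlyHint: true,      // gathers input; does not modify state
  destructiveHint: false,  // no destructive side effects
  idempotentHint: false,   // each call prompts the user again
  openWorldHint: false,    // interacts only with the connected client
};
```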

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence: 'Demonstrates the Elicitation feature by asking the user to provide information about their favorite color, number, and pets.' It's front-loaded with the main purpose and includes necessary details without waste. Every word earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has zero parameters, no annotations, and no output schema, the description provides basic context about what the tool does (eliciting user information). However, it says nothing about the tool's runtime behavior, its output format, or how it relates to sibling tools. It is adequate as a minimal description, but it leaves clear gaps in how the tool operates and what callers should expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema declares zero parameters, so schema coverage is trivially complete: there is nothing to document. The description appropriately adds no parameter details, since there are none to explain. A baseline of 4 applies to zero-parameter tools, because the description has no missing schema information to compensate for and aligns with the schema's emptiness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Demonstrates the Elicitation feature by asking the user to provide information about their favorite color, number, and pets.' It specifies the verb ('demonstrates') and the resource/feature ('Elicitation feature'), and explains what the tool does (asks for specific user information). However, it doesn't explicitly differentiate from sibling tools like 'annotatedMessage' or 'sampleLLM' that might also involve user interaction, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions it 'demonstrates the Elicitation feature,' which implies a demo or testing context, but doesn't specify when to choose it over other tools like 'annotatedMessage' for user input or 'sampleLLM' for AI interaction. There's no mention of prerequisites, exclusions, or explicit alternatives, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
