
MCP Elicitations Demo Server

by soriat

annotatedMessage

Demonstrates how annotations add metadata (priority and audience) to content in the MCP elicitations demo server, supporting error, success, and debug message types, with optional image inclusion.

Instructions

Demonstrates how annotations can be used to provide metadata about content

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| includeImage | No | Whether to include an example image | false |
| messageType | Yes | Type of message to demonstrate different annotation patterns | |

Implementation Reference

  • The handler function executes the tool logic: parses input arguments using the schema, constructs annotated text and optional image content based on messageType (error, success, debug), and returns structured content with priority and audience annotations.
    handler: async (args: any) => {
      const { messageType, includeImage } = AnnotatedMessageSchema.parse(args);
    
      const content: any[] = [];
    
      // Main message with different priorities/audiences based on type
      if (messageType === "error") {
        content.push({
          type: "text" as const,
          text: "Error: Operation failed",
          annotations: {
            priority: 1.0, // Errors are highest priority
            audience: ["user", "assistant"], // Both need to know about errors
          },
        });
      } else if (messageType === "success") {
        content.push({
          type: "text" as const,
          text: "Operation completed successfully",
          annotations: {
            priority: 0.7, // Success messages are important but not critical
            audience: ["user"], // Success mainly for user consumption
          },
        });
      } else if (messageType === "debug") {
        content.push({
          type: "text" as const,
          text: "Debug: Cache hit ratio 0.95, latency 150ms",
          annotations: {
            priority: 0.3, // Debug info is low priority
            audience: ["assistant"], // Technical details for assistant
          },
        });
      }
    
      // Optional image with its own annotations
      if (includeImage) {
        content.push({
          type: "image" as const,
          data: MCP_TINY_IMAGE,
          mimeType: "image/png",
          annotations: {
            priority: 0.5,
            audience: ["user"], // Images primarily for user visualization
          },
        });
      }
    
      return { content };
    },
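To see why the `priority` and `audience` annotations matter, consider how a client might consume the handler's output. The following is a hypothetical client-side sketch (not part of the demo server): the `Annotated` type and `selectFor` helper are assumptions that mirror the content shape the handler returns.

```typescript
// Hypothetical client-side sketch: filter annotated content by audience
// and order it by priority. The object shapes mirror the handler's
// return value; none of this is part of the demo server itself.
type Audience = "user" | "assistant";

interface Annotated {
  type: "text" | "image";
  text?: string;
  annotations?: { priority?: number; audience?: Audience[] };
}

// Keep only items addressed to `audience` (items with no audience
// annotation are kept), highest priority first.
function selectFor(content: Annotated[], audience: Audience): Annotated[] {
  return content
    .filter((c) => c.annotations?.audience?.includes(audience) ?? true)
    .sort(
      (a, b) =>
        (b.annotations?.priority ?? 0) - (a.annotations?.priority ?? 0),
    );
}

// Example: the content the handler returns for messageType "error".
const content: Annotated[] = [
  {
    type: "text",
    text: "Error: Operation failed",
    annotations: { priority: 1.0, audience: ["user", "assistant"] },
  },
];

const forUser = selectFor(content, "user");
```

Under this reading, an error (priority 1.0, both audiences) always surfaces first, while debug output (priority 0.3, assistant only) would be dropped entirely from a user-facing view.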
  • Zod schema defining the input parameters: messageType (enum: error, success, debug) and optional includeImage (boolean, default false). Converted to JSON schema for the tool inputSchema.
    const AnnotatedMessageSchema = z.object({
      messageType: z
        .enum(["error", "success", "debug"])
        .describe("Type of message to demonstrate different annotation patterns"),
      includeImage: z
        .boolean()
        .default(false)
        .describe("Whether to include an example image"),
    });
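The converted JSON Schema comes out roughly as follows — a sketch of what a zod-to-JSON-Schema conversion typically produces for this shape; the exact keywords emitted depend on the conversion library used.

```typescript
// Approximate JSON Schema derived from AnnotatedMessageSchema. This is a
// hand-written sketch, not the literal output of any particular converter.
const inputSchema = {
  type: "object",
  properties: {
    messageType: {
      type: "string",
      enum: ["error", "success", "debug"],
      description:
        "Type of message to demonstrate different annotation patterns",
    },
    includeImage: {
      type: "boolean",
      default: false,
      description: "Whether to include an example image",
    },
  },
  required: ["messageType"],
} as const;
```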
  • The annotatedMessageTool is imported (line 13) and included in the allTools array, which is used by getTools() to list tool specifications and getToolHandler() to dispatch calls to the correct handler during MCP server setup.
    const allTools = [
      echoTool,
      addTool,
      longRunningOperationTool,
      printEnvTool,
      sampleLlmTool,
      sampleWithPreferencesTool,
      sampleMultimodalTool,
      sampleConversationTool,
      sampleAdvancedTool,
      getTinyImageTool,
      annotatedMessageTool,
      getResourceReferenceTool,
      elicitationTool,
      getResourceLinksTool,
    ];
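The dispatch described above can be sketched as follows. This is a minimal, hypothetical reconstruction: the `Tool` shape with `name` and `handler` fields, and the stub tools in the array, are assumptions — the real server's tool objects carry full specifications.

```typescript
// Hypothetical sketch of name-based dispatch over the tool registry.
// The `Tool` shape and the stub entries are assumptions for illustration.
type ToolHandler = (args: unknown) => Promise<unknown>;

interface Tool {
  name: string;
  handler: ToolHandler;
}

const allTools: Tool[] = [
  { name: "echo", handler: async (args) => args },
  { name: "annotatedMessage", handler: async () => ({ content: [] }) },
];

// getTools() would list tool specifications; getToolHandler() dispatches
// an incoming call to the matching handler by tool name.
function getToolHandler(name: string): ToolHandler | undefined {
  return allTools.find((t) => t.name === name)?.handler;
}
```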
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description only states it 'demonstrates how annotations can be used', which doesn't reveal whether this is a read-only operation, if it has side effects, what it returns, or any performance characteristics. For a tool with no annotation coverage, this leaves critical behavioral traits unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, straightforward sentence that efficiently conveys its core idea without unnecessary words. It's appropriately sized for a demonstration tool, though it could be more front-loaded with actionable information about what the tool actually produces or does.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., a sample annotated message, a demonstration output), how the parameters affect the demonstration, or any educational context. For a tool with 2 parameters and no structured output documentation, more detail is needed to make it fully usable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters ('includeImage' and 'messageType'), including an enum for messageType. The description adds no parameter-specific information beyond what the schema already provides. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'demonstrates how annotations can be used to provide metadata about content', which gives a vague purpose but doesn't specify what the tool actually does operationally. It mentions 'annotations' and 'metadata' but lacks a clear verb+resource combination. Compared to siblings like 'echo', 'add', or 'getResourceLinks', it doesn't clearly differentiate its specific function beyond being a demonstration tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any specific contexts, prerequisites, or exclusions. Given siblings like 'echo' (for echoing input) or 'sampleLLM' (for LLM sampling), there's no indication of when this demonstration tool would be preferred over others for testing or educational purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

