
MCP Elicitations Demo Server

by soriat

longRunningOperation

Simulates a long-running operation with configurable duration and step count, emitting progress updates at each step. Useful for testing and monitoring progress reporting on the MCP Elicitations Demo Server.

Instructions

Demonstrates a long running operation with progress updates

Input Schema

Name      Required  Description                           Default
duration  No        Duration of the operation in seconds  10
steps     No        Number of steps in the operation      5

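Both parameters are optional; per the Zod schema in the implementation reference, duration defaults to 10 and steps to 5. As a quick sketch of how they interact (each step sleeps duration / steps seconds; the helper below is illustrative, not part of the server):

```typescript
// Sketch: effective values and per-step delay for a few example inputs.
function effectiveArgs(args: { duration?: number; steps?: number }) {
  const duration = args.duration ?? 10; // schema default
  const steps = args.steps ?? 5;       // schema default
  return { duration, steps, stepDuration: duration / steps };
}

console.log(effectiveArgs({}));                        // { duration: 10, steps: 5, stepDuration: 2 }
console.log(effectiveArgs({ duration: 6, steps: 3 })); // { duration: 6, steps: 3, stepDuration: 2 }
```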
Implementation Reference

  • The handler function that executes the long-running operation logic, parsing inputs, simulating steps with delays, sending progress notifications if available, and returning a completion message.
    handler: async (args: any, request: any, server: Server) => {
      const validatedArgs = LongRunningOperationSchema.parse(args);
      const { duration, steps } = validatedArgs;
      const stepDuration = duration / steps;
      const progressToken = request.params._meta?.progressToken;

      for (let i = 1; i <= steps; i++) {
        // Simulate one unit of work.
        await new Promise((resolve) =>
          setTimeout(resolve, stepDuration * 1000)
        );

        // Report progress only when the client supplied a progress token.
        if (progressToken !== undefined) {
          await server.notification({
            method: "notifications/progress",
            params: {
              progress: i,
              total: steps,
              progressToken,
            },
          });
        }
      }

      return {
        content: [
          {
            type: "text" as const,
            text: `Long running operation completed. Duration: ${duration} seconds, Steps: ${steps}.`,
          },
        ],
      };
    },
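The loop above emits one notifications/progress message per step. A rough sketch of the sequence of params payloads it would produce for the default inputs (the token value here is an assumed placeholder; the real one is whatever opaque value the client sent in _meta):

```typescript
// Sketch of the progress payloads emitted for duration = 10, steps = 5.
const steps = 5;
const progressToken = "example-token"; // assumed placeholder
const payloads = Array.from({ length: steps }, (_, idx) => ({
  progress: idx + 1,
  total: steps,
  progressToken,
}));
console.log(payloads[0]); // { progress: 1, total: 5, progressToken: 'example-token' }
```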
  • Zod schema for validating tool inputs: duration (default 10 seconds) and steps (default 5). Used in handler and converted to JSON schema for inputSchema.
    const LongRunningOperationSchema = z.object({
      duration: z
        .number()
        .default(10)
        .describe("Duration of the operation in seconds"),
      steps: z.number().default(5).describe("Number of steps in the operation"),
    });
  • Registration of the longRunningOperationTool in the allTools array, which is used by getTools() to list tools and getToolHandler() to find the handler for execution.
    const allTools = [
      echoTool,
      addTool,
      longRunningOperationTool,
      printEnvTool,
      sampleLlmTool,
      sampleWithPreferencesTool,
      sampleMultimodalTool,
      sampleConversationTool,
      sampleAdvancedTool,
      getTinyImageTool,
      annotatedMessageTool,
      getResourceReferenceTool,
      elicitationTool,
      getResourceLinksTool,
    ];
  • Setup function that registers the general request handlers for listing tools (ListToolsRequestSchema) and calling tools (CallToolRequestSchema) on the MCP server, enabling execution of longRunningOperationTool.
    export const setupTools = (server: Server) => {
      // Handle listing all available tools
      server.setRequestHandler(ListToolsRequestSchema, async () => {
        return { tools: getTools() };
      });
    
      // Handle tool execution
      server.setRequestHandler(CallToolRequestSchema, async (request) => {
        const { name, arguments: args } = request.params;
        const handler = getToolHandler(name);
    
        if (handler) {
          return await handler(args, request, server);
        }
    
        throw new Error(`Unknown tool: ${name}`);
      });
    };
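The getTools() and getToolHandler() helpers referenced above are not shown on this page. A minimal hypothetical sketch, assuming each entry in allTools carries a name plus a handler (the real implementations may differ):

```typescript
// Hypothetical helpers; names come from the text, bodies are assumed.
type ToolEntry = { name: string; description: string; handler: Function };

const allTools: ToolEntry[] = [
  { name: "longRunningOperation", description: "demo tool", handler: async () => ({}) },
];

// Listing strips the handler so only metadata is sent to clients.
const getTools = () => allTools.map(({ handler, ...meta }) => meta);

// Lookup by name for CallToolRequest dispatch; undefined means unknown tool.
const getToolHandler = (name: string) =>
  allTools.find((t) => t.name === name)?.handler;

console.log(getTools()[0].name);                   // longRunningOperation
console.log(getToolHandler("nope") === undefined); // true
```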
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'long running operation' and 'progress updates', which hints at asynchronous behavior and status reporting, but doesn't specify timeout expectations, cancellation support, error handling, or what 'progress updates' actually entail. This leaves significant behavioral gaps for a tool explicitly about long-running operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose. There's no wasted language or unnecessary elaboration. It's appropriately sized for what it communicates.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool about long-running operations with no annotations and no output schema, the description is insufficient. It doesn't explain what the operation actually does, what progress updates look like, how results are returned, or any error conditions. The agent lacks critical context needed to properly use this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (duration and steps). The description adds no additional parameter semantics beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'demonstrates a long running operation with progress updates', which provides a general purpose but lacks specificity about what resource or domain it operates on. It distinguishes from siblings by mentioning 'long running' and 'progress updates', but doesn't specify what exactly is being operated on or demonstrated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, constraints, or comparison to sibling tools. The agent must infer usage purely from the name and description without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/soriat/soria-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server