
Strudel MCP Server

validate_pattern_runtime

Validate Strudel music patterns by checking for runtime errors during execution, ensuring code runs correctly before live performance.

Instructions

Validate pattern with runtime error checking (monitors Strudel console for errors)

Input Schema

| Name    | Required | Description                                    | Default |
|---------|----------|------------------------------------------------|---------|
| pattern | Yes      | Pattern code to validate                       | (none)  |
| waitMs  | No       | How long to wait for errors, in milliseconds   | 500     |
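Based on this schema, a `tools/call` request for the tool might look like the sketch below. The JSON-RPC envelope follows the MCP specification; the pattern string is only an illustrative example, and the exact wrapper depends on your MCP client.

```typescript
// Hypothetical JSON-RPC 2.0 payload invoking validate_pattern_runtime.
// Only `pattern` is required; `waitMs` falls back to 500 ms server-side.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "validate_pattern_runtime",
    arguments: {
      pattern: 's("bd sd bd sd")', // example Strudel mini-notation
      waitMs: 500,                 // optional
    },
  },
};
```

Note that the tool requires the browser to be initialized (via the server's init tool) before it can be called.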

Implementation Reference

  • Main handler function that performs the runtime validation: it writes the pattern to the editor, briefly plays it to trigger evaluation, and captures console errors and warnings from Strudel.
    async validatePatternRuntime(pattern: string, waitMs: number = 500): Promise<{
      valid: boolean;
      errors: string[];
      warnings: string[];
    }> {
      if (!this._page) {
        throw new Error('Browser not initialized. Run init tool first.');
      }
    
      // Clear previous errors
      this.clearConsoleMessages();
    
      // Write pattern
      await this.writePattern(pattern);
    
      // Brief play to trigger evaluation - Strudel uses lazy evaluation
      // so errors only appear when the pattern is actually executed
      try {
        await this._page.keyboard.press('ControlOrMeta+Enter');
        await this._page.waitForTimeout(Math.min(waitMs, 300));
        await this._page.keyboard.press('ControlOrMeta+Period');
      } catch (e) {
        this.logger.warn('Failed to trigger pattern evaluation', e);
      }
    
      // Wait for potential errors to appear
      await this._page.waitForTimeout(waitMs);
    
      const errors = this.getConsoleErrors();
      const warnings = this.getConsoleWarnings();
    
      return {
        valid: errors.length === 0,
        errors,
        warnings
      };
    }
  • Tool registration in the getTools() array, including name, description, and input schema definition.
      name: 'validate_pattern_runtime',
      description: 'Validate pattern with runtime error checking (monitors Strudel console for errors)',
      inputSchema: {
        type: 'object',
        properties: {
          pattern: { type: 'string', description: 'Pattern code to validate' },
          waitMs: { type: 'number', description: 'How long to wait for errors (default 500ms)' }
        },
        required: ['pattern']
      }
    },
  • Tool dispatch handler in the executeTool switch statement that calls the controller's validatePatternRuntime method and formats the response.
    case 'validate_pattern_runtime': {
      if (!this.isInitialized) {
        return 'Browser not initialized. Run init first.';
      }
      InputValidator.validateStringLength(args.pattern, 'pattern', 10000, false);
      // Use ?? rather than || so an explicit waitMs of 0 is respected
      const validation = await this.controller.validatePatternRuntime(
        args.pattern,
        args.waitMs ?? 500
      );
    
      if (validation.valid) {
        return '✅ Pattern valid - no runtime errors detected';
      } else {
        return `❌ Pattern has runtime errors:\n${validation.errors.join('\n')}\n` +
               (validation.warnings.length > 0 ? `\nWarnings:\n${validation.warnings.join('\n')}` : '');
      }
    }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'runtime error checking' and 'monitors Strudel console for errors', which gives some insight into the tool's behavior, but it lacks details on permissions, rate limits, error handling, or what happens during validation (e.g., does it modify the pattern?). For a tool with no annotations, this is insufficient to fully understand its operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise and front-loaded, consisting of a single sentence that directly states the tool's function. Every word earns its place by specifying the action, resource, and mechanism without unnecessary details. It's efficiently structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a validation tool with runtime checking, no annotations, and no output schema, the description is minimally adequate. It covers the basic purpose but lacks details on behavioral aspects, usage context, and return values. While it's complete enough to convey the core function, it doesn't provide the full context needed for optimal tool selection and invocation in a rich sibling tool environment.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear documentation for both parameters ('pattern' and 'waitMs'). The description doesn't add any extra meaning beyond what the schema provides, such as explaining the format of 'pattern' code or the implications of 'waitMs'. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Validate pattern with runtime error checking (monitors Strudel console for errors)'. It specifies the verb 'validate' and the resource 'pattern', and mentions the specific mechanism 'runtime error checking' and target 'Strudel console'. However, it doesn't explicitly differentiate from sibling tools like 'show_errors' or 'diagnostics', which might have overlapping functions, so it doesn't reach a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, exclusions, or comparisons to sibling tools such as 'show_errors' or 'diagnostics', which could be related. Without such context, users might struggle to choose the right tool for error checking in the Strudel environment.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
