
LacyLights MCP Server

by bbernstein

create_cue_sequence

Generate a sequence of lighting cues for the LacyLights system by combining existing scenes, specifying transition preferences, and supplying script context to guide the theatrical lighting design.

Instructions

Create a sequence of lighting cues from existing scenes

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| projectId | Yes | Project ID to create cue sequence in | |
| sceneIds | Yes | Scene IDs to include in sequence | |
| scriptContext | Yes | Script context for the cue sequence | |
| sequenceName | Yes | Name for the cue sequence | |
| transitionPreferences | No | | |
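To make the parameter shapes concrete, here is a hypothetical set of arguments for this tool; the project and scene IDs are illustrative placeholders, not real records:

```typescript
// Hypothetical example arguments for create_cue_sequence.
// All IDs below are placeholders invented for illustration.
const args = {
  projectId: "proj-123",
  sequenceName: "Act 1 Cues",
  scriptContext: "Act 1: a storm builds from an uneasy calm to a blackout.",
  sceneIds: ["scene-calm", "scene-storm", "scene-blackout"],
  // Optional: omit this object entirely to let the server decide transitions.
  transitionPreferences: {
    defaultFadeIn: 5,
    defaultFadeOut: 3,
    followCues: false,
    autoAdvance: false,
  },
};
```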

Implementation Reference

  • Main handler for the create_cue_sequence tool: it parses and validates input, resolves the referenced scenes, generates cue timing with the AI service, creates the cue list and individual cues in the database, and returns detailed results with statistics.
    async createCueSequence(args: z.infer<typeof CreateCueSequenceSchema>) {
      const {
        projectId,
        scriptContext,
        sceneIds,
        sequenceName,
        transitionPreferences,
      } = CreateCueSequenceSchema.parse(args);
    
      try {
        // Get project and validate scenes
        const project = await this.graphqlClient.getProject(projectId);
        if (!project) {
          throw new Error(`Project with ID ${projectId} not found`);
        }
    
        // Map scenes in the exact order of sceneIds to maintain consistency
        const scenes = sceneIds.map((sceneId) => {
          const scene = project.scenes.find((s) => s.id === sceneId);
          if (!scene) {
            throw new Error(`Scene with ID ${sceneId} not found in the project`);
          }
          return scene;
        });
    
        // Convert scenes to GeneratedScene format for AI processing
        const generatedScenes: GeneratedScene[] = scenes.map((scene) => ({
          name: scene.name,
          description: scene.description || "",
          fixtureValues: scene.fixtureValues.map((fv) => ({
            fixtureId: fv.fixture.id,
            channelValues: fv.channelValues, // Already a number array
          })),
          reasoning: `Existing scene: ${scene.name}`,
        }));
    
        // Generate cue sequence using AI
        const cueSequence = await this.aiLightingService.generateCueSequence(
          scriptContext,
          generatedScenes,
          transitionPreferences,
        );
    
        // Create the cue list in the database
        const cueList = await this.graphqlClient.createCueList({
          name: sequenceName,
          description: cueSequence.description,
          projectId,
        });
    
        // Create individual cues
        const createdCues = [];
        for (let i = 0; i < cueSequence.cues.length; i++) {
          const cueData = cueSequence.cues[i];
          // The AI returns sceneId as a string that might be an index or scene reference
          // Try to parse it as an index first
          let sceneId: string;
          const sceneIdAsNumber = parseInt(cueData.sceneId);
    
          if (
            !isNaN(sceneIdAsNumber) &&
            sceneIdAsNumber >= 0 &&
            sceneIdAsNumber < sceneIds.length
          ) {
            // It's a valid index, use it
            sceneId = sceneIds[sceneIdAsNumber];
          } else {
            // Try to find it in the sceneIds array
            const sceneIndex = sceneIds.findIndex((id) => id === cueData.sceneId);
            sceneId =
              sceneIndex >= 0
                ? sceneIds[sceneIndex]
                : sceneIds[Math.min(i, sceneIds.length - 1)]; // Fallback to corresponding index or last scene
          }
    
          const cue = await this.graphqlClient.createCue({
            name: cueData.name,
            cueNumber: cueData.cueNumber,
            cueListId: cueList.id,
            sceneId: sceneId,
            fadeInTime: cueData.fadeInTime,
            fadeOutTime: cueData.fadeOutTime,
            followTime: cueData.followTime,
            notes: cueData.notes,
          });
    
          createdCues.push(cue);
        }
    
        return {
          cueListId: cueList.id,
          cueList: {
            name: cueList.name,
            description: cueList.description,
            totalCues: createdCues.length,
          },
          cues: createdCues.map((cue) => ({
            id: cue.id,
            name: cue.name,
            cueNumber: cue.cueNumber,
            fadeInTime: cue.fadeInTime,
            fadeOutTime: cue.fadeOutTime,
            followTime: cue.followTime,
            notes: cue.notes,
            sceneName: cue.scene.name,
          })),
          sequenceReasoning: cueSequence.reasoning,
          statistics: {
            totalCues: createdCues.length,
            averageFadeTime:
              createdCues.reduce((sum, cue) => sum + cue.fadeInTime, 0) /
              createdCues.length,
            followCues: createdCues.filter((cue) => cue.followTime !== null)
              .length,
            estimatedDuration: this.estimateSequenceDuration(createdCues),
          },
        };
      } catch (error) {
        throw new Error(`Failed to create cue sequence: ${error}`);
      }
    }
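The trickiest part of the handler is resolving the scene reference the AI returns, which may be a positional index, an exact scene ID, or something unrecognized. A standalone sketch of that resolution logic, extracted from the loop above:

```typescript
// Sketch of the sceneId resolution used in createCueSequence:
// a numeric index into sceneIds wins, then an exact ID match,
// then a fallback to the scene at the cue's own position.
function resolveSceneId(
  aiSceneId: string,
  sceneIds: string[],
  cueIndex: number,
): string {
  const asNumber = parseInt(aiSceneId, 10);
  if (!isNaN(asNumber) && asNumber >= 0 && asNumber < sceneIds.length) {
    return sceneIds[asNumber]; // valid positional index
  }
  if (sceneIds.includes(aiSceneId)) {
    return aiSceneId; // exact scene ID match
  }
  // Fallback: the scene at the cue's position, clamped to the last scene
  return sceneIds[Math.min(cueIndex, sceneIds.length - 1)];
}
```

Note that `parseInt("scene-2", 10)` yields `NaN` (the string starts with a letter), so typical ID strings safely fall through to the ID-match branch.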
  • Zod schema for validating input parameters to the create_cue_sequence tool.
    const CreateCueSequenceSchema = z.object({
      projectId: z.string(),
      scriptContext: z.string(),
      sceneIds: z.array(z.string()),
      sequenceName: z.string(),
      transitionPreferences: z
        .object({
          defaultFadeIn: z.number().default(3),
          defaultFadeOut: z.number().default(3),
          followCues: z.boolean().default(false),
          autoAdvance: z.boolean().default(false),
        })
        .optional(),
    });
  • src/index.ts:1182-1222 (registration)
    Tool registration in listTools handler: defines name, description, and inputSchema advertised to MCP clients.
      name: "create_cue_sequence",
      description:
        "Create a sequence of lighting cues from existing scenes",
      inputSchema: {
        type: "object",
        properties: {
          projectId: {
            type: "string",
            description: "Project ID to create cue sequence in",
          },
          scriptContext: {
            type: "string",
            description: "Script context for the cue sequence",
          },
          sceneIds: {
            type: "array",
            items: { type: "string" },
            description: "Scene IDs to include in sequence",
          },
          sequenceName: {
            type: "string",
            description: "Name for the cue sequence",
          },
          transitionPreferences: {
            type: "object",
            properties: {
              defaultFadeIn: { type: "number", default: 3 },
              defaultFadeOut: { type: "number", default: 3 },
              followCues: { type: "boolean", default: false },
              autoAdvance: { type: "boolean", default: false },
            },
          },
        },
        required: [
          "projectId",
          "scriptContext",
          "sceneIds",
          "sequenceName",
        ],
      },
    },
  • src/index.ts:2254-2266 (registration)
    Tool call handler dispatch: routes create_cue_sequence calls to cueTools.createCueSequence method.
    case "create_cue_sequence":
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(
              await this.cueTools.createCueSequence(args as any),
              null,
              2,
            ),
          },
        ],
      };
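For context, a request reaching this dispatch would arrive as an MCP `tools/call` JSON-RPC message. A hypothetical payload (IDs and argument values are invented for illustration):

```typescript
// Hypothetical MCP tools/call request body for this dispatch branch.
// Field names follow the MCP JSON-RPC convention; values are placeholders.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "create_cue_sequence",
    arguments: {
      projectId: "proj-123",
      scriptContext: "Act 1, scenes 1 through 3",
      sceneIds: ["scene-a", "scene-b"],
      sequenceName: "Act 1 Sequence",
    },
  },
};
```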
  • src/index.ts:57-61 (instantiation)
    Instantiation of CueTools class instance used for handling cue-related tools including create_cue_sequence.
    this.cueTools = new CueTools(
      this.graphqlClient,
      this.ragService,
      this.aiLightingService,
    );
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool creates a sequence but doesn't cover critical aspects like permissions required, whether it's idempotent, error handling, or what the output looks like (no output schema). This is a significant gap for a creation tool with multiple parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, nested objects, no output schema, and no annotations), the description is inadequate. It lacks details on behavioral traits, output expectations, and usage context, leaving the agent with insufficient information to invoke the tool confidently in a real-world scenario.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 80%, providing a solid baseline. The description adds minimal value beyond the schema by implying 'sceneIds' are used to build the sequence, but it doesn't explain parameter interactions (e.g., how 'transitionPreferences' affect the sequence) or provide additional context like format examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('create') and resource ('sequence of lighting cues from existing scenes'), making the purpose evident. It is distinguished from siblings like 'generate_scene' or 'update_cue' by its focus on sequence creation from existing scenes, though it doesn't explicitly contrast with tools like 'generate_act_cues' or 'reorder_cues'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing existing scenes), exclusions (e.g., not for modifying sequences), or comparisons to siblings like 'generate_act_cues' or 'reorder_cues', leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
