# LacyLights MCP Server
An MCP (Model Context Protocol) server that provides AI-powered theatrical lighting design capabilities for the LacyLights system.
## Features

### Fixture Management

- `get_fixture_inventory` - Query available lighting fixtures and their capabilities
- `analyze_fixture_capabilities` - Analyze specific fixtures for color mixing, positioning, effects, etc.
### Scene Generation

- `generate_scene` - Generate lighting scenes based on script context and design preferences
- `analyze_script` - Extract lighting-relevant information from theatrical scripts
- `optimize_scene` - Optimize existing scenes for energy efficiency, dramatic impact, etc.
### Cue Management

- `create_cue_sequence` - Create sequences of lighting cues from existing scenes
- `generate_act_cues` - Generate complete cue suggestions for theatrical acts
- `optimize_cue_timing` - Optimize cue timing for smooth transitions or dramatic effect
- `analyze_cue_structure` - Analyze and recommend improvements to cue lists
## Installation

1. Install dependencies:
2. Set up environment variables:
3. Build the project:
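The three steps above can be sketched as follows, assuming standard npm scripts and an `.env.example` template (both are assumptions from a typical TypeScript setup, not confirmed by this repo):

```shell
npm install
cp .env.example .env   # then edit .env with your values
npm run build
```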
### ChromaDB Setup (Optional)
The MCP server currently uses an in-memory pattern storage system for simplicity. If you want to use ChromaDB for persistent vector storage and more advanced RAG capabilities:
#### Option 1: Docker (Recommended)
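For example, using the official `chromadb/chroma` image (the published host port should match `CHROMA_PORT`, default 8000):

```shell
docker run -d -p 8000:8000 chromadb/chroma
```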
#### Option 2: Local Installation
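Alternatively, ChromaDB can be installed and run locally with pip (the `chroma run` CLI ships with recent chromadb releases; flags may vary by version):

```shell
pip install chromadb
chroma run --port 8000
```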
Then update your `.env` file:
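For example, matching the defaults listed under Optional Environment Variables:

```
CHROMA_HOST=localhost
CHROMA_PORT=8000
```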
Note: The current implementation works without ChromaDB using built-in lighting patterns. ChromaDB enhances the system with vector similarity search for more sophisticated pattern matching.
## Configuration

### Required Environment Variables

- `OPENAI_API_KEY` - OpenAI API key for AI-powered lighting generation
- `LACYLIGHTS_GRAPHQL_ENDPOINT` - GraphQL endpoint for your lacylights-node backend (default: `http://localhost:4000/graphql`)
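A minimal `.env` covering both variables might look like this (the API key value is a placeholder):

```
OPENAI_API_KEY=sk-...
LACYLIGHTS_GRAPHQL_ENDPOINT=http://localhost:4000/graphql
```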
### Running the Server

Make sure your `lacylights-node` backend is running first, then start the MCP server.
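Assuming a standard `start` script in package.json (an assumption; check the scripts defined there):

```shell
npm start
```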
### Optional Environment Variables

- `CHROMA_HOST` - ChromaDB host for RAG functionality (default: `localhost`)
- `CHROMA_PORT` - ChromaDB port (default: `8000`)
## Usage
### Integration with Claude
Add this server to your Claude configuration:
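A typical entry in `claude_desktop_config.json` follows the standard MCP `mcpServers` shape; the path below is a placeholder for your local checkout:

```json
{
  "mcpServers": {
    "lacylights": {
      "command": "node",
      "args": ["/path/to/lacylights-mcp/run-mcp.js"],
      "env": {
        "OPENAI_API_KEY": "your-api-key",
        "LACYLIGHTS_GRAPHQL_ENDPOINT": "http://localhost:4000/graphql"
      }
    }
  }
}
```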
**Important**: If the above doesn't work, you may need to specify the exact path to your Node.js installation (version 18+, as required by package.json). You can find it with `which node`.
**Note**: Use the absolute path to `run-mcp.js` in your configuration. This wrapper ensures proper CommonJS module loading.
Use the `generate_scene` tool to create lighting for:
- Scene: "Lady Macbeth's sleepwalking scene"
- Script Context: "A dark castle at night, Lady Macbeth enters carrying a candle, tormented by guilt"
- Mood: "mysterious"
- Color Palette: ["deep blue", "pale white", "cold"]
Use the `analyze_script` tool with the full text of Act 1 of Macbeth to:
- Extract all lighting cues
- Suggest scenes for each moment
- Identify key lighting moments
Use the `create_cue_sequence` tool to create a cue list for Act 1 using the scenes generated from script analysis.
## Project Structure

```
src/
├── tools/              # MCP tool implementations
│   ├── fixture-tools.ts
│   ├── scene-tools.ts
│   └── cue-tools.ts
├── services/           # Core services
│   ├── graphql-client.ts
│   ├── rag-service.ts
│   └── ai-lighting.ts
├── types/              # TypeScript type definitions
│   └── lighting.ts
└── index.ts            # MCP server entry point
```
## Integration with LacyLights
This MCP server is designed to work with the existing LacyLights system:
- `lacylights-node` - Provides GraphQL API for fixture and scene management
- `lacylights-fe` - Frontend for manual lighting control and visualization
The MCP server acts as an AI layer that enhances the existing system with intelligent automation and design assistance.
## Troubleshooting

### Common Issues
- **Module import errors**
  - The server uses ES modules with cross-fetch for GraphQL requests
  - Ensure Node.js version is 18+ as specified in package.json
- **GraphQL connection errors**
  - Verify your `lacylights-node` backend is running on port 4000
  - Check the `LACYLIGHTS_GRAPHQL_ENDPOINT` environment variable
- **OpenAI API errors**
  - Ensure your `OPENAI_API_KEY` is set in the `.env` file
  - Verify the API key has access to GPT-4
- **MCP connection errors in Claude**
  - Make sure to use the `run-mcp.js` wrapper script, not `dist/index.js` directly
  - Use the full absolute path in your Claude configuration
  - Restart Claude after updating the MCP configuration
- **"Unexpected token ?" error**
  - This means Claude is using an old Node.js version (< 14)
  - Update your config to use the full path to your Node.js installation
  - On macOS with Homebrew: `"command": "/opt/homebrew/bin/node"`
  - On other systems, find your node path with `which node`
## Dependencies

The simplified implementation uses:

- Direct fetch requests instead of Apollo Client for better ESM compatibility
- In-memory pattern storage instead of ChromaDB (can be upgraded later)
- The cross-fetch polyfill for Node.js fetch support
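As a sketch of the direct-fetch approach (function and query names here are illustrative, not the server's actual internals), a minimal GraphQL request helper might look like:

```typescript
// Serialize a GraphQL-over-HTTP POST body: { query, variables }.
export function buildGraphQLBody(
  query: string,
  variables: Record<string, unknown> = {}
): string {
  return JSON.stringify({ query, variables });
}

// Send the request with plain fetch (Node 18+ provides a global fetch;
// the cross-fetch polyfill covers older runtimes).
export async function gqlRequest<T>(
  endpoint: string,
  query: string,
  variables: Record<string, unknown> = {}
): Promise<T> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildGraphQLBody(query, variables),
  });
  const json = (await res.json()) as { data?: T; errors?: unknown };
  if (json.errors) {
    throw new Error("GraphQL errors: " + JSON.stringify(json.errors));
  }
  return json.data as T;
}
```

This avoids Apollo Client's heavier dependency graph while keeping the request shape identical on the wire.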
## License
MIT