# Fal.ai MCP Server
A Model Context Protocol (MCP) server that enables Claude Desktop (and other MCP clients) to generate images, videos, music, and audio using Fal.ai models.
## Features

### Performance
- **Native Async API** - Uses `fal_client.run_async()` for optimal performance (see the sketch after this list)
- **Queue Support** - Long-running tasks (video/music) use the queue API with progress updates
- **Non-blocking** - All operations are truly asynchronous
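The call pattern looks roughly like the following. This is a minimal sketch of the approach, not this server's actual code; it assumes the `fal-client` Python SDK's `run_async`/`submit_async` interfaces, a `FAL_KEY` environment variable, and uses illustrative model IDs.

```python
# Sketch only: assumes the fal-client SDK and FAL_KEY in the environment.
import asyncio
import fal_client

async def generate_image(prompt: str) -> dict:
    # Short-running request: await the result directly.
    return await fal_client.run_async(
        "fal-ai/flux/schnell",            # illustrative model ID
        arguments={"prompt": prompt},
    )

async def generate_video(image_url: str) -> dict:
    # Long-running request: go through the queue and surface progress.
    handle = await fal_client.submit_async(
        "fal-ai/fast-svd",                # illustrative model ID
        arguments={"image_url": image_url},
    )
    async for event in handle.iter_events(with_logs=True):
        print(event)                      # queue position / progress updates
    return await handle.get()             # final result once the job completes

if __name__ == "__main__":
    print(asyncio.run(generate_image("a sunset over the ocean")))
```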
### Transport Modes (New!)

- **STDIO** - Traditional Model Context Protocol communication
- **HTTP/SSE** - Web-based access via Server-Sent Events
- **Dual Mode** - Run both transports simultaneously

### Media Generation

- **Image Generation** - Create images using Flux, SDXL, and other models
- **Video Generation** - Generate videos from images or text prompts
- **Music Generation** - Create music from text descriptions
- **Text-to-Speech** - Convert text to natural speech
- **Audio Transcription** - Transcribe audio using Whisper
- **Image Upscaling** - Enhance image resolution
- **Image-to-Image** - Transform existing images with prompts
Related MCP server: FL Studio MCP
## Quick Start

### Prerequisites

- Python 3.10 or higher
- Fal.ai API key (free tier available)
- Claude Desktop (or any MCP-compatible client)

### Installation

#### Option 1: Docker (Recommended for Production)
The official Docker image is published to GitHub Container Registry:
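For example (the image path below is a placeholder for the project's published image; the container is assumed to read the `FAL_KEY` environment variable):

```bash
# Placeholder image path - substitute the image actually published for this project.
docker pull ghcr.io/<owner>/fal-ai-mcp:latest

# STDIO mode for Claude Desktop (-i keeps stdin open for MCP traffic):
docker run --rm -i -e FAL_KEY=your-fal-api-key ghcr.io/<owner>/fal-ai-mcp:latest
```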
Or use Docker Compose:
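An illustrative `docker-compose.yml` (the image name is a placeholder and the port matches the SSE endpoint documented below):

```yaml
# Illustrative compose file - not the project's official one.
services:
  fal-ai-mcp:
    image: ghcr.io/<owner>/fal-ai-mcp:latest   # placeholder image path
    environment:
      - FAL_KEY=${FAL_KEY}                     # your Fal.ai API key
    ports:
      - "8080:8080"                            # SSE endpoint (see HTTP/SSE section)
    restart: unless-stopped
```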
#### Option 2: Install from PyPI
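The package name below is an assumption; use the name actually published on PyPI for this project:

```bash
pip install fal-ai-mcp-server   # hypothetical package name
```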
Or with uv:
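Again assuming the hypothetical package name above:

```bash
uv pip install fal-ai-mcp-server   # hypothetical package name
```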
#### Option 3: Install from source
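Roughly (the repository URL and directory name are placeholders):

```bash
git clone https://github.com/<owner>/fal-ai-mcp.git   # placeholder URL
cd fal-ai-mcp
pip install -e .
```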
### Configuration
Get your Fal.ai API key from [fal.ai](https://fal.ai).

Configure Claude Desktop by adding the server to its config file:

- **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
For Docker Installation:
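A sketch of the `mcpServers` entry, reusing the placeholder image path from above (`-i` keeps stdin open for the STDIO transport):

```json
{
  "mcpServers": {
    "fal-ai": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "-e", "FAL_KEY", "ghcr.io/<owner>/fal-ai-mcp:latest"],
      "env": {
        "FAL_KEY": "your-fal-api-key"
      }
    }
  }
}
```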
For PyPI Installation:
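Assuming the PyPI package installs a console script (the command name is hypothetical):

```json
{
  "mcpServers": {
    "fal-ai": {
      "command": "fal-ai-mcp",
      "env": {
        "FAL_KEY": "your-fal-api-key"
      }
    }
  }
}
```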
For Source Installation:
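Assuming the clone was installed with `pip install -e .` as above (the module name is a placeholder):

```json
{
  "mcpServers": {
    "fal-ai": {
      "command": "python",
      "args": ["-m", "fal_ai_mcp"],
      "env": {
        "FAL_KEY": "your-fal-api-key"
      }
    }
  }
}
```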
Finally, restart Claude Desktop so it picks up the new server.
## Usage

### With Claude Desktop
Once configured, ask Claude to:
"Generate an image of a sunset"
"Create a video from this image"
"Generate 30 seconds of ambient music"
"Convert this text to speech"
"Transcribe this audio file"
### HTTP/SSE Transport (New!)
Run the server with HTTP transport for web-based access:
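For example (the command and flag names are assumptions; check the server's `--help` output for the exact options):

```bash
# Assumed invocation - the exact flags may differ.
fal-ai-mcp --transport sse --host 0.0.0.0 --port 8000
```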
Connect from web clients via Server-Sent Events:
- **SSE endpoint:** `http://localhost:8080/sse` (Docker) or `http://localhost:8000/sse` (pip)
- **Message endpoint:** `POST http://localhost:8080/messages/`
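For a quick connectivity check against the Docker endpoint above:

```bash
# Keep the connection open (-N) and watch the event stream;
# the server's first event should advertise the session-specific /messages/ URL.
curl -N http://localhost:8080/sse
```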
See Docker Documentation and HTTP Transport Documentation for details.
## Supported Models

### Image Models

- `flux_schnell` - Fast, high-quality generation
- `flux_dev` - Development version with more control
- `sdxl` - Stable Diffusion XL

### Video Models

- `svd` - Stable Video Diffusion
- `animatediff` - Text-to-video animation

### Audio Models

- `musicgen` - Music generation
- `bark` - Text-to-speech
- `whisper` - Audio transcription
## Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
### Local Development
We support local CI testing with [act](https://github.com/nektos/act):
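For example (assumes act and Docker are installed; the job name is a placeholder for whatever is defined in `.github/workflows/`):

```bash
act push       # run the workflows triggered by a push event
act -j test    # run a single job, e.g. one named "test"
```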
## License
MIT License - see LICENSE file for details.