Fal.ai MCP Server
The Fal.ai MCP Server enables Claude Desktop and other MCP clients to generate and edit media using 600+ Fal.ai AI models through a unified interface.
Core Capabilities:
Image Generation: Create images from text prompts using models like Flux (Schnell, Dev, Pro), SDXL, and Stable Diffusion v3. Control composition, style, aspect ratios (square, landscape 4:3/16:9, portrait 3:4/9:16), seed values for reproducibility, and generate multiple variations (up to 4 images). Supports negative prompts and image-to-image style transfer.
Image Editing: Remove backgrounds, upscale resolution (2x or 4x), edit with natural language, inpaint specific regions with masks, smart resize for social media, and overlay images (e.g., watermarks).
Video Generation: Create videos from text or animate existing images using SVD, AnimateDiff, and Kling models. Control duration from 2 to 10 seconds with async processing and progress updates.
Audio Generation: Generate instrumental music or songs with vocals from text descriptions using MusicGen (medium/large variants). Customize duration from 5 to 300 seconds with queue-based processing. Includes text-to-speech and audio transcription (Bark, Whisper).
Model Discovery & Management: Access 600+ models via smart filtering, search by category (image, video, audio), get AI-powered recommendations, and use the list_models tool or specify full model IDs.
Utilities: Check pricing and usage statistics, view spending history, upload local files for processing, and leverage an asynchronous, non-blocking architecture with queue support for long-running tasks.
Flexible Deployment: Run via STDIO, HTTP/SSE, or dual transport modes using uvx, Docker, PyPI, or from source.
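To make the image-generation parameters above concrete, here is a minimal sketch of how a text-to-image request payload could be assembled. The function name and parameter names (`image_size`, `seed`, `num_images`, `negative_prompt`) are illustrative, following common Fal.ai conventions; check each model's schema for the exact fields it accepts.

```python
from typing import Optional

def build_image_request(prompt: str,
                        aspect: str = "square_hd",
                        seed: Optional[int] = None,
                        num_images: int = 1,
                        negative_prompt: Optional[str] = None) -> dict:
    """Assemble an illustrative text-to-image payload (hypothetical helper)."""
    if not 1 <= num_images <= 4:
        raise ValueError("num_images must be between 1 and 4")
    payload = {"prompt": prompt, "image_size": aspect, "num_images": num_images}
    if seed is not None:
        payload["seed"] = seed  # a fixed seed makes the output reproducible
    if negative_prompt:
        payload["negative_prompt"] = negative_prompt
    return payload

req = build_image_request("a futuristic city at sunset",
                          aspect="landscape_16_9", seed=42)
print(req)
```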
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Fal.ai MCP Server generate an image of a futuristic city at sunset with flying cars"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
🎨 Fal.ai MCP Server
A Model Context Protocol (MCP) server that enables Claude Desktop (and other MCP clients) to generate images, videos, music, and audio using Fal.ai models.
✨ Features
🚀 Performance
Native Async API - Uses fal_client.run_async() for optimal performance
Queue Support - Long-running tasks (video/music) use queue API with progress updates
Non-blocking - All operations are truly asynchronous
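The non-blocking behavior can be pictured with plain asyncio: independent generation calls are awaited concurrently rather than serially, so total wall time tracks the slowest task, not the sum. This sketch uses a stubbed coroutine in place of a real call like `fal_client.run_async()`:

```python
import asyncio

async def fake_generate(kind: str, delay: float) -> str:
    # Stand-in for an async model call such as fal_client.run_async()
    await asyncio.sleep(delay)
    return f"{kind} done"

async def main() -> list:
    # Both tasks run concurrently; wall time ~= max(delays), not their sum
    return await asyncio.gather(
        fake_generate("image", 0.05),
        fake_generate("music", 0.08),
    )

results = asyncio.run(main())
print(results)  # ['image done', 'music done']
```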
🌐 Transport Modes (New!)
STDIO - Traditional Model Context Protocol communication
HTTP/SSE - Web-based access via Server-Sent Events
Dual Mode - Run both transports simultaneously
🎨 Media Generation (18 Tools)
Image Generation:
🖼️ generate_image - Create images from text prompts (Flux, SDXL, etc.)
🎯 generate_image_structured - Fine-grained control over composition, lighting, subjects
🔄 generate_image_from_image - Transform existing images with style transfer
Image Editing:
✂️ remove_background - Remove backgrounds from images (transparent PNG)
🔍 upscale_image - Upscale images 2x or 4x while preserving quality
✏️ edit_image - Edit images using natural language instructions
🎭 inpaint_image - Edit specific regions using masks
📐 resize_image - Smart resize for social media (Instagram, YouTube, TikTok, etc.)
🏷️ compose_images - Overlay images (watermarks, logos) with precise positioning
Video Tools:
🎬 generate_video - Text-to-video and image-to-video generation
📹 generate_video_from_image - Animate images into videos
🔀 generate_video_from_video - Video restyling and motion transfer
Audio Tools:
🎵 generate_music - Create instrumental music or songs with vocals
Utility Tools:
🔍 list_models - Discover 600+ available models with smart filtering
💡 recommend_model - AI-powered model recommendations for your task
💰 get_pricing - Check costs before generating content
📊 get_usage - View spending history and usage stats
⬆️ upload_file - Upload local files for use with generation tools
🔍 Dynamic Model Discovery (New!)
600+ Models - Access all models available on Fal.ai platform
Auto-Discovery - Models are fetched dynamically from the Fal.ai API
Smart Caching - TTL-based cache for optimal performance
Flexible Input - Use full model IDs or friendly aliases
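A TTL-based cache of the kind described above can be sketched in a few lines. This is an illustrative implementation with an injectable clock, not the server's actual code:

```python
import time

class TTLCache:
    """Entries expire after ttl seconds (illustrative sketch)."""

    def __init__(self, ttl: float, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock      # injectable for testing
        self._store = {}        # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if self.clock() >= expires_at:
            del self._store[key]   # stale: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (self.clock() + self.ttl, value)

cache = TTLCache(ttl=300)
cache.set("models", ["fal-ai/flux-pro/v1.1-ultra"])
print(cache.get("models"))
```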
🚀 Quick Start
Prerequisites
Python 3.10 or higher
Fal.ai API key (free tier available)
Claude Desktop (or any MCP-compatible client)
Installation
Option 0: Claude Code Plugin (Simplest for Claude Code Users) 🔌
If you're using Claude Code, install directly via the plugin system:
# Add the Luminary Lane Tools marketplace
/plugin marketplace add raveenb/fal-mcp-server
# Install the fal-ai plugin
/plugin install fal-ai@luminary-lane-tools

Or install directly without adding the marketplace:

/plugin install fal-ai@raveenb/fal-mcp-server

Note: You'll need to set FAL_KEY in your environment before using the plugin.
Option 1: uvx (Recommended - Zero Install) ⚡
Run directly without installation using uv:
# Run the MCP server directly
uvx --from fal-mcp-server fal-mcp
# Or with specific version
uvx --from fal-mcp-server==1.4.0 fal-mcp

Claude Desktop Configuration for uvx:
{
"mcpServers": {
"fal-ai": {
"command": "uvx",
"args": ["--from", "fal-mcp-server", "fal-mcp"],
"env": {
"FAL_KEY": "your-fal-api-key"
}
}
}
}

Note: Install uv first:
curl -LsSf https://astral.sh/uv/install.sh | sh
Option 2: Docker (Recommended for Production) 🐳
Official Docker image available on GitHub Container Registry.
Step 1: Start the Docker container
# Pull and run with your API key
docker run -d \
--name fal-mcp \
-e FAL_KEY=your-api-key \
-p 8080:8080 \
ghcr.io/raveenb/fal-mcp-server:latest
# Verify it's running
docker logs fal-mcp

Step 2: Configure Claude Desktop to connect
Add to your Claude Desktop config file:
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json

Windows:
%APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"fal-ai": {
"command": "npx",
"args": ["mcp-remote", "http://localhost:8080/sse"]
}
}
}

Note: This uses mcp-remote to connect to the HTTP/SSE endpoint. Alternatively, if you have curl available: "command": "curl", "args": ["-N", "http://localhost:8080/sse"]
Step 3: Restart Claude Desktop
The fal-ai tools should now be available.
Docker Environment Variables:

| Variable | Default | Description |
| --- | --- | --- |
| FAL_KEY | (required) | Your Fal.ai API key |
| FAL_MCP_TRANSPORT | | Transport mode (stdio, http, or dual) |
| | | Host to bind the server to |
| | | Port for the HTTP server |
Using Docker Compose:
curl -O https://raw.githubusercontent.com/raveenb/fal-mcp-server/main/docker-compose.yml
echo "FAL_KEY=your-api-key" > .env
docker-compose up -d

⚠️ File Upload with Docker:
The upload_file tool requires volume mounts to access host files:
docker run -d -p 8080:8080 \
-e FAL_KEY="${FAL_KEY}" \
-e FAL_MCP_TRANSPORT=http \
-v ${HOME}/Downloads:/downloads:ro \
-v ${HOME}/Pictures:/pictures:ro \
ghcr.io/raveenb/fal-mcp-server:latest

Then use container paths like /downloads/image.png instead of host paths.
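The host-to-container path translation is mechanical. This hypothetical helper (not part of the server) shows the mapping produced by the volume mounts in the example above, with a made-up home directory:

```python
# Hypothetical mapping: host dirs -> container dirs, matching the -v
# flags in the docker run example ("/Users/me" is a placeholder home).
MOUNTS = {
    "/Users/me/Downloads": "/downloads",
    "/Users/me/Pictures": "/pictures",
}

def to_container_path(host_path: str) -> str:
    """Translate a host file path to its container-visible equivalent."""
    for host_dir, container_dir in MOUNTS.items():
        if host_path.startswith(host_dir + "/"):
            return container_dir + host_path[len(host_dir):]
    raise ValueError(f"{host_path} is not under a mounted directory")

print(to_container_path("/Users/me/Downloads/image.png"))  # /downloads/image.png
```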
| Feature | stdio (uvx) | Docker (HTTP/SSE) |
| --- | --- | --- |
| File access | ✅ Full filesystem | ⚠️ Needs volume mounts |
| Security | Runs as user | Sandboxed container |
Option 3: Install from PyPI
pip install fal-mcp-server

Or with uv:

uv pip install fal-mcp-server

Option 4: Install from source
git clone https://github.com/raveenb/fal-mcp-server.git
cd fal-mcp-server
pip install -e .

Configuration
Get your Fal.ai API key from fal.ai
Configure Claude Desktop by adding to:
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json

Windows:
%APPDATA%\Claude\claude_desktop_config.json
For PyPI/pip Installation:
{
"mcpServers": {
"fal-ai": {
"command": "fal-mcp",
"env": {
"FAL_KEY": "your-fal-api-key"
}
}
}
}

Note: For Docker configuration, see Option 2: Docker above.
For Source Installation:
{
"mcpServers": {
"fal-ai": {
"command": "python",
"args": ["/path/to/fal-mcp-server/src/fal_mcp_server/server.py"],
"env": {
"FAL_KEY": "your-fal-api-key"
}
}
}
}

Restart Claude Desktop
💬 Usage
With Claude Desktop
Once configured, ask Claude to:
"Generate an image of a sunset"
"Create a video from this image"
"Generate 30 seconds of ambient music"
"Convert this text to speech"
"Transcribe this audio file"
Discovering Available Models
Use the list_models tool to discover available models:
"What image models are available?"
"List video generation models"
"Search for flux models"
Using Any Fal.ai Model
You can use any model from the Fal.ai platform:
# Using a friendly alias (backward compatible)
"Generate an image with flux_schnell"
# Using a full model ID (new capability)
"Generate an image using fal-ai/flux-pro/v1.1-ultra"
"Create a video with fal-ai/kling-video/v1.5/pro"

HTTP/SSE Transport (New!)
Run the server with HTTP transport for web-based access:
# Using Docker (recommended)
docker run -d -e FAL_KEY=your-key -p 8080:8080 ghcr.io/raveenb/fal-mcp-server:latest
# Using pip installation
fal-mcp-http --host 0.0.0.0 --port 8000
# Or dual mode (STDIO + HTTP)
fal-mcp-dual --transport dual --port 8000

Connect from web clients via Server-Sent Events:

SSE endpoint: http://localhost:8080/sse (Docker) or http://localhost:8000/sse (pip)
Message endpoint: POST http://localhost:8080/messages/
See Docker Documentation and HTTP Transport Documentation for details.
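MCP messages are JSON-RPC 2.0, so a client posting to the message endpoint sends a body shaped roughly like the sketch below. The tool name and argument come from this README; session setup (the MCP initialize handshake and endpoint negotiation) is omitted, so treat this as an illustration of the wire format, not a complete client:

```python
import json

# Illustrative JSON-RPC 2.0 body for invoking the generate_image tool.
# A real client first performs the MCP initialize handshake.
message = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_image",
        "arguments": {"prompt": "a sunset"},
    },
}
body = json.dumps(message)
print(body)
```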
📦 Supported Models
This server supports 600+ models from the Fal.ai platform through dynamic discovery. Use the list_models tool to explore available models, or use any model ID directly.
Popular Aliases (Quick Reference)
These friendly aliases are always available for commonly used models:
The mapping spans image models (e.g. flux_schnell), video models, and audio models; run the list_models tool to see the current alias-to-model-ID table.
Using Full Model IDs
You can also use any model directly by its full ID:
# Examples of full model IDs
"fal-ai/flux-pro/v1.1-ultra" # Latest Flux Pro
"fal-ai/kling-video/v1.5/pro" # Kling Video Pro
"fal-ai/hunyuan-video" # Hunyuan Video
"fal-ai/minimax-video"          # MiniMax Video

Use list_models with category filters to discover more:

- list_models(category="image") - All image generation models
- list_models(category="video") - All video generation models
- list_models(category="audio") - All audio models
- list_models(search="flux") - Search for specific models
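The category and search filters behave like simple predicates over the model catalog. A minimal sketch, using a tiny made-up catalog in place of the live Fal.ai registry that the real list_models tool queries:

```python
from typing import Optional

# Made-up mini-catalog; the real tool queries the live Fal.ai registry.
CATALOG = [
    {"id": "fal-ai/flux-pro/v1.1-ultra", "category": "image"},
    {"id": "fal-ai/kling-video/v1.5/pro", "category": "video"},
    {"id": "fal-ai/hunyuan-video", "category": "video"},
]

def list_models(category: Optional[str] = None,
                search: Optional[str] = None) -> list:
    models = CATALOG
    if category:
        models = [m for m in models if m["category"] == category]
    if search:
        models = [m for m in models if search.lower() in m["id"].lower()]
    return [m["id"] for m in models]

print(list_models(category="video"))
print(list_models(search="flux"))
```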
📚 Documentation
Guides cover:

- Detailed setup instructions for all platforms
- Complete tool documentation with parameters
- Usage examples for image, video, and audio generation
- Container deployment and configuration
- Web-based SSE transport setup
- Running CI locally with act
📖 Full documentation site: raveenb.github.io/fal-mcp-server
🔌 Claude Code Plugin Marketplace
This project is part of the Luminary Lane Tools marketplace for Claude Code plugins.
Add the marketplace:
/plugin marketplace add raveenb/fal-mcp-server

Available plugins:

| Plugin | Description |
| --- | --- |
| fal-ai | Generate images, videos, and music using 600+ Fal.ai models |
More plugins coming soon!
🔧 Troubleshooting
Common Errors
FAL_KEY not set
Error: FAL_KEY environment variable is required

Solution: Set your Fal.ai API key:

export FAL_KEY="your-api-key"

Model not found
Error: Model 'xyz' not found

Solution: Use list_models to discover available models, or check the model ID spelling.
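When a model ID is merely misspelled, close-match suggestions narrow things down quickly. A sketch using Python's standard difflib, assuming you already have a list of valid IDs (e.g. from list_models):

```python
import difflib

# Assumed to come from list_models in practice
KNOWN = ["fal-ai/flux-pro/v1.1-ultra", "fal-ai/hunyuan-video", "fal-ai/minimax-video"]

def suggest(model_id: str, known=KNOWN, n: int = 2) -> list:
    # Up to n close matches, or [] if nothing is similar enough
    return difflib.get_close_matches(model_id, known, n=n, cutoff=0.6)

print(suggest("fal-ai/hunyan-video"))
```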
File not found (Docker)
Error: File not found: /Users/username/image.png

Solution: When using Docker, mount the directory as a volume. See File Upload with Docker above.
Timeout on video/music generation
Error: Generation timed out after 300s

Solution: Video and music generation can take several minutes. This is normal for high-quality models. Try:

- Using a faster model variant (e.g., schnell instead of pro)
- Reducing duration or resolution
Rate limiting
Error: Rate limit exceeded

Solution: Wait a few minutes and retry. Consider upgrading your Fal.ai plan for higher limits.
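The "wait and retry" advice can be automated with a retry-with-exponential-backoff wrapper. This sketch is generic (not part of the server); the sleep function is injectable so the pattern is testable without real delays:

```python
import time

def retry_with_backoff(fn, retries: int = 3, base_delay: float = 1.0,
                       sleep=time.sleep):
    """Call fn; on failure, wait base_delay * 2**attempt and retry."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise                         # out of retries: re-raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

calls = {"n": 0}
def flaky():
    # Fails twice (simulating rate limiting), then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("Rate limit exceeded")
    return "ok"

print(retry_with_backoff(flaky, sleep=lambda s: None))  # ok
```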
Debug Mode
Enable verbose logging for troubleshooting:
# Set debug environment variable
export FAL_MCP_DEBUG=true
# Run the server
fal-mcp

Reporting Issues
If you encounter a bug or unexpected behavior:
Check existing issues: GitHub Issues
Gather information:
Error message (full text)
Steps to reproduce
Model ID used
Environment (OS, Python version, transport mode)
Open a new issue with:
**Error:** [paste error message]
**Steps to reproduce:** [what you did]
**Model:** [model ID if applicable]
**Environment:** [OS, Python version, Docker/uvx/pip]

Include logs if available (with sensitive data removed)
🤝 Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
Local Development
We support local CI testing with act:
# Quick setup
make ci-local # Run CI locally before pushing
# See detailed guide
cat docs/LOCAL_TESTING.md

📝 License
MIT License - see LICENSE file for details.