fal MCP Server
A Model Context Protocol (MCP) server for interacting with fal.ai models and services. This project was inspired by am0y's MCP server, but updated to use the latest streaming MCP support.
Features
- List all available fal.ai models
- Search for specific models by keywords
- Get model schemas
- Generate content using any fal.ai model
- Support for both direct and queued model execution
- Queue management (status checking, getting results, cancelling requests)
- File upload to fal.ai CDN
- Full streaming support via HTTP transport
Requirements
- Python 3.12+
- fastmcp
- httpx
- aiofiles
- A fal.ai API key
Installation
- Clone this repository:
- Install the required packages:
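A minimal sketch of the two steps above (the repository URL and directory name are placeholders; substitute the actual ones):

```shell
# 1. Clone this repository
git clone <repository-url>
cd <repository-directory>

# 2. Install the dependencies listed under Requirements
pip install fastmcp httpx aiofiles
```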
Usage
Running the Server Locally
- Get your fal.ai API key from fal.ai
- Start the MCP server with HTTP transport:
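For example, using the `run_http.sh` helper described under Environment Variables, with the API key passed inline (see Setting API Key Permanently below for an alternative):

```shell
FAL_KEY="your-fal-api-key" ./run_http.sh
```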
The server will start and display connection information in your terminal.
- Connect to it from your LLM IDE (Claude Code or Cursor) by adding to your configuration:
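A sketch of the client configuration; the port matches the `PORT` default documented below, and the `/mcp` endpoint path is an assumption based on FastMCP's HTTP transport defaults:

```json
{
  "mcpServers": {
    "fal": {
      "url": "http://localhost:6274/mcp"
    }
  }
}
```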
Development Mode (with MCP Inspector)
For testing and debugging, you can run the server in development mode:
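One way to do this, assuming the FastMCP CLI is installed and the entry point is `server.py` (the filename is hypothetical; use the project's actual module):

```shell
FAL_KEY="your-fal-api-key" fastmcp dev server.py
```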
This will:
- Start the server on a random port
- Launch the MCP Inspector web interface in your browser
- Allow you to test all tools interactively with a web UI
The Inspector URL will be displayed in the terminal (typically http://localhost:PORT).
Environment Variables
The `run_http.sh` script automatically handles all environment variables for you. If you need to customize:

- `PORT`: Server port for HTTP transport (default: 6274)
Setting API Key Permanently
If you prefer to set your API key permanently instead of passing it each time:
- Create a `.env` file in the project root:
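For example (replace the value with your actual key):

```
FAL_KEY=your-fal-api-key
```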
- Then run the server without the API key argument:
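With the key stored in `.env`, the helper script can be invoked on its own (assuming `run_http.sh` picks up the `.env` file, as its description under Environment Variables suggests):

```shell
./run_http.sh
```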
For manual setup:

- `FAL_KEY`: Your fal.ai API key (required)
- `MCP_TRANSPORT`: Transport mode - `stdio` (default) or `http`
Available Tools
- `models(page=None, total=None)` - List available models with optional pagination
- `search(keywords)` - Search for models by keywords
- `schema(model_id)` - Get the OpenAPI schema for a specific model
- `generate(model, parameters, queue=False)` - Generate content using a model
- `result(url)` - Get the result of a queued request
- `status(url)` - Check the status of a queued request
- `cancel(url)` - Cancel a queued request
- `upload(path)` - Upload a file to the fal.ai CDN
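As an illustration, the tools above can be invoked programmatically with the fastmcp 2.x client. This sketch assumes the server is already running locally over HTTP on the default port, and that the `/mcp` endpoint path follows FastMCP's defaults:

```python
# Requires the server running locally (see Running the Server Locally).
import asyncio
from fastmcp import Client  # fastmcp 2.x client

async def main() -> None:
    # Port 6274 is the documented PORT default; "/mcp" is an assumed path.
    async with Client("http://localhost:6274/mcp") as client:
        tools = await client.list_tools()
        print([t.name for t in tools])
        # Call the search tool with a keyword query.
        result = await client.call_tool("search", {"keywords": "flux"})
        print(result)

if __name__ == "__main__":
    asyncio.run(main())
```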
License