# Imagen MCP Server
A Model Context Protocol (MCP) server for image generation using Google's Imagen model and other models supported by the Nexos.ai platform.
## Features
- **Simple Image Generation**: Generate a single image from a text prompt
- **Batch Image Generation**: Generate multiple images with background processing (see the sketch after this list)
  - First image is returned immediately
  - Remaining images are generated in the background
  - Query for additional images as they become available
- **Model Catalog**: Access comprehensive information about all available models
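A minimal sketch of that batch flow, using the tool names documented under Tools below (illustrative only; error handling omitted):

```python
# Start a batch: the first image is returned right away
batch = await start_image_batch(prompt="A red fox in the snow", count=4)
print(f"First image: {batch.first_image_path}")

# Poll for the remaining images as they finish in the background
while True:
    nxt = await get_next_image(session_id=batch.session_id)
    if nxt.file_path:
        print(f"Next image: {nxt.file_path}")
    if not nxt.has_more:
        break
```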
## Supported Models
| Model | Provider | Description |
|-------|----------|-------------|
| `imagen-4` | Google | Flagship model with excellent prompt following and photorealistic output |
| `imagen-4-fast` | Google | Faster variant optimized for speed |
| `imagen-4-ultra` | Google | Highest quality for premium image generation |
| `dall-e-3` | OpenAI | High-quality model with excellent artistic capabilities |
| `gpt-image-1` | OpenAI | Strong prompt understanding and versatile output |
## Installation
### Option 1: Install with pipx (Recommended for CLI usage)
```bash
# Install directly from the repository
pipx install git+https://github.com/your-username/Imagen-MCP.git
# Or install from local directory
cd Imagen-MCP
pipx install .
# Run the server
imagen-mcp
```
### Option 2: Install with Poetry (Recommended for development)
```bash
# Clone the repository
git clone <repository-url>
cd Imagen-MCP
# Install dependencies with Poetry
poetry install
# Run the server
poetry run imagen-mcp
# Or
poetry run python -m Imagen_MCP.server
```
### Option 3: Install with pip
```bash
# Install from the repository
pip install git+https://github.com/your-username/Imagen-MCP.git
# Or install from local directory
pip install .
# Run the server
imagen-mcp
```
### Environment Variables
Set up your Nexos.ai API key:
```bash
export NEXOS_API_KEY=your-api-key-here
```
Or create a `.env` file:
```env
NEXOS_API_KEY=your-api-key-here
```
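To confirm the key is actually visible to the server process, here is a minimal check, assuming the standard `os.environ` lookup and the optional `python-dotenv` package (the server's own configuration handling lives in `config.py`):

```python
import os

from dotenv import load_dotenv  # optional; only needed when using a .env file

load_dotenv()  # picks up .env from the current directory, if present
if not os.environ.get("NEXOS_API_KEY"):
    raise RuntimeError("NEXOS_API_KEY is not set")
```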
## Usage
### Running the Server
```bash
# If installed with pipx or pip
imagen-mcp
# If using Poetry (development)
poetry run imagen-mcp
# Alternative: run as Python module
poetry run python -m Imagen_MCP.server
# With FastMCP CLI (more options)
poetry run fastmcp run Imagen_MCP/server.py --transport http --port 8000
```
### CLI Options
When using the `fastmcp run` command, you have additional options:
| Option | Description |
|--------|-------------|
| `--transport`, `-t` | Transport protocol: `stdio` (default), `http`, `sse`, `streamable-http` |
| `--host` | Host to bind to (default: 127.0.0.1) |
| `--port`, `-p` | Port for HTTP/SSE transport (default: 8000) |
| `--log-level`, `-l` | Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL |
| `--no-banner` | Don't show the server banner |
### MCP Client Configuration
To use this MCP server with an AI agent, add the following configuration to your MCP client.
#### Claude Desktop (pipx installation)
If you installed with pipx, add to your Claude Desktop configuration file (`~/.config/claude/claude_desktop_config.json` on Linux, `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
  "mcpServers": {
    "imagen": {
      "command": "imagen-mcp",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}
```
#### Claude Desktop (Poetry installation)
If you're using Poetry for development:
```json
{
  "mcpServers": {
    "imagen": {
      "command": "poetry",
      "args": ["run", "imagen-mcp"],
      "cwd": "/path/to/Imagen-MCP",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}
```
#### Cline / Roo Code
Add to your VS Code settings or Cline MCP configuration:
```json
{
  "mcpServers": {
    "imagen": {
      "command": "imagen-mcp",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}
```
#### Generic MCP Client (Copy-Paste Ready)
For pipx/pip installation:
```json
{
  "imagen": {
    "command": "imagen-mcp",
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}
```
For Poetry installation:
```json
{
  "imagen": {
    "command": "poetry",
    "args": ["run", "imagen-mcp"],
    "cwd": "/path/to/Imagen-MCP",
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}
```
**Configuration Options:**
| Field | Description |
|-------|-------------|
| `command` | The executable to launch: `imagen-mcp` for pipx/pip installs, `poetry` for Poetry-managed projects |
| `args` | Arguments passed to the command (only needed for the Poetry and `python -m` configurations) |
| `cwd` | Working directory; set this to your Imagen-MCP installation path (Poetry configuration) |
| `env` | Environment variables, including the required `NEXOS_API_KEY` |
**Important:** Replace `/path/to/Imagen-MCP` with the actual path to your Imagen-MCP installation and `your-nexos-api-key-here` with your Nexos.ai API key.
#### Alternative: Using pip-installed package
If you install the package globally or in a virtual environment:
```json
{
  "imagen": {
    "command": "python",
    "args": ["-m", "Imagen_MCP.server"],
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}
```
## Tools
### `list_models`
List all available image generation models with their descriptions, capabilities, and use cases.
**Parameters:** None
**Returns:**
- `models`: List of all available models with details
- `total_count`: Number of available models
- `default_model`: The default model ID
- `usage_hint`: How to use the model parameter
**Example Response:**
```json
{
  "models": [
    {
      "id": "imagen-4",
      "name": "Imagen 4",
      "provider": "Google",
      "description": "Google's flagship image generation model...",
      "use_cases": ["Photorealistic image generation", ...],
      "strengths": ["Excellent prompt adherence", ...],
      "weaknesses": ["Slower generation time", ...],
      "supported_sizes": ["256x256", "512x512", "1024x1024", ...],
      "max_images_per_request": 4,
      "supports_hd_quality": true,
      "rate_limit": "100 messages per 3 hours"
    },
    ...
  ],
  "total_count": 5,
  "default_model": "imagen-4"
}
```
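A hypothetical call sketch built on the fields above (whether the tool is awaited and whether catalog entries come back as dicts or objects depends on your client, so treat the access pattern as an assumption):

```python
catalog = await list_models()
print(f"{catalog.total_count} models available (default: {catalog.default_model})")
for entry in catalog.models:
    print(f"- {entry['id']} ({entry['provider']})")
```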
### `get_model_details`
Get detailed information about a specific image generation model.
**Parameters:**
- `model_id` (required): The model identifier (e.g., "imagen-4", "imagen-4-fast", "dall-e-3")
**Returns:**
- Complete model details including capabilities, rate limits, use cases, strengths, and weaknesses
- Error message if model not found
**Example:**
```python
result = get_model_details(model_id="imagen-4-fast")
```
### `generate_image`
Generate a single image from a text prompt. The image is saved to a file (temporary file if no path specified).
**Parameters:**
- `prompt` (required): Text description of the image to generate
- `model` (optional): Model to use (default: "imagen-4")
- `size` (optional): Image size (default: "1024x1024")
- `quality` (optional): Image quality - "standard" or "hd" (default: "standard")
- `style` (optional): Image style - "vivid" or "natural" (default: "vivid")
**Returns:**
- `success`: Whether the image was generated successfully
- `file_path`: Absolute path to the saved image file
- `file_size_bytes`: Size of the saved image file in bytes
- `model_used`: The model that was used for generation
- `revised_prompt`: The revised prompt (if the model modified it)
- `error`: Error message if generation failed
**Example:**
```python
result = await generate_image(
    prompt="A serene mountain landscape at sunset",
    model="imagen-4",
    size="1024x1024",
    quality="hd",
    style="natural"
)
if result.success:
    print(f"Image saved to: {result.file_path}")
    print(f"File size: {result.file_size_bytes} bytes")
```
### `start_image_batch`
Start generating multiple images and return the first one immediately. Images are saved to files (in a temporary directory if no path specified).
**Parameters:**
- `prompt` (required): Text description of the image to generate
- `count` (optional): Number of images to generate, 2-10 (default: 4)
- `model` (optional): Model to use (default: "imagen-4")
- `size` (optional): Image size (default: "1024x1024")
- `quality` (optional): Image quality (default: "standard")
- `style` (optional): Image style (default: "vivid")
**Returns:**
- `success`: Whether the batch was started successfully
- `session_id`: ID for retrieving more images
- `first_image_path`: Path to the first generated image file
- `first_image_size_bytes`: Size of the first image file in bytes
- `pending_count`: Number of images still being generated
- `error`: Error message if batch failed to start
**Example:**
```python
result = await start_image_batch(
    prompt="A futuristic cityscape",
    count=5,
    model="imagen-4"
)
if result.success:
    print(f"Session ID: {result.session_id}")
    print(f"First image: {result.first_image_path}")
```
### `get_next_image`
Get the next available image from a batch generation session. The image is saved to a file (temporary file if no path specified).
**Parameters:**
- `session_id` (required): Session ID from start_image_batch
- `timeout` (optional): Maximum wait time in seconds (default: 60)
**Returns:**
- `success`: Whether an image was retrieved
- `file_path`: Path to the saved image file (or null if no image available)
- `file_size_bytes`: Size of the saved image file in bytes
- `has_more`: Whether more images are available or pending
- `pending_count`: Number of images still being generated
- `error`: Error message if retrieval failed
**Example:**
```python
while True:
    result = await get_next_image(session_id=session_id)
    if result.file_path:
        print(f"Image saved to: {result.file_path}")
    if not result.has_more:
        break
```
### `get_batch_status`
Get the current status of a batch generation session.
**Parameters:**
- `session_id` (required): Session ID from start_image_batch
**Returns:**
- `status`: Session status (created, generating, partial, completed, failed)
- `completed_count`: Number of completed images
- `pending_count`: Number of pending images
- `total_count`: Total number of requested images
- `errors`: List of any errors encountered
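A minimal polling sketch built on the fields above (call style mirrors the other examples in this README; the sleep interval is arbitrary):

```python
import asyncio

# Poll until the batch finishes, reporting progress along the way
while True:
    status = await get_batch_status(session_id=session_id)
    print(f"{status.completed_count}/{status.total_count} done, "
          f"{status.pending_count} pending (status: {status.status})")
    if status.status in ("completed", "failed"):
        if status.errors:
            print("Errors:", status.errors)
        break
    await asyncio.sleep(5)
```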
## Resources
### `models://image-generation`
Get the complete catalog of available image generation models with their capabilities, rate limits, use cases, strengths, and weaknesses.
### `models://image-generation/{model_id}`
Get detailed information about a specific model.
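As a sketch of how a client might read these resources, here is a hypothetical example using the FastMCP Python client over stdio (the exact client API and return type may differ between FastMCP versions, so treat the calls as assumptions):

```python
import asyncio

from fastmcp import Client


async def main():
    # Connect to the local server script over stdio
    async with Client("Imagen_MCP/server.py") as client:
        # Full model catalog
        catalog = await client.read_resource("models://image-generation")
        print(catalog)

        # Details for a single model
        details = await client.read_resource("models://image-generation/imagen-4")
        print(details)


asyncio.run(main())
```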
## Development
### Running Tests
```bash
# Run all tests
poetry run pytest
# Run with verbose output
poetry run pytest -v
# Run specific test file
poetry run pytest tests/unit/test_generate_image.py
```
### Project Structure
```
Imagen_MCP/
├── __init__.py             # Package exports
├── server.py               # FastMCP server definition
├── config.py               # Configuration management
├── constants.py            # Constants and type definitions
├── exceptions.py           # Custom exceptions
├── tools/
│   ├── generate_image.py   # Simple image generation tool
│   └── batch_generate.py   # Batch generation tools
├── resources/
│   └── models.py           # Model catalog resource
├── services/
│   ├── nexos_client.py     # Nexos.ai API client
│   ├── session_manager.py  # Background generation session manager
│   └── model_registry.py   # Model information registry
└── models/
    ├── image.py            # Image data models
    ├── generation.py       # Generation request/response models
    └── session.py          # Session state models
```
## Rate Limits
All models are in Category 3 on Nexos.ai:
- 100 messages per 3 hours
## License
MIT License