
generate_multimodal_response

Generate AI responses from a text prompt combined with images or files, using one of several multimodal model providers.

Instructions

Generate response from multimodal model.

Args:
    model: Model name to use
    prompt: Text prompt
    image_urls: Optional list of image URLs
    file_paths: Optional list of file paths
    system_prompt: Optional system prompt
    max_tokens: Maximum tokens to generate (default: 1000)
    temperature: Generation temperature (default: 0.7)
    provider: Optional provider name (openai, dashscope)

Returns:
    Generated response text

Input Schema

| Name          | Required | Description                          | Default |
| ------------- | -------- | ------------------------------------ | ------- |
| model         | Yes      | Model name to use                    |         |
| prompt        | Yes      | Text prompt                          |         |
| image_urls    | No       | Optional list of image URLs          | null    |
| file_paths    | No       | Optional list of file paths          | null    |
| system_prompt | No       | Optional system prompt               | null    |
| max_tokens    | No       | Maximum tokens to generate           | 1000    |
| temperature   | No       | Generation temperature               | 0.7     |
| provider      | No       | Provider name (openai, dashscope)    | null    |
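A minimal sketch of a valid arguments payload for this tool. The model name, image URL, and provider value are placeholders; per the schema, only `model` and `prompt` are required, and the remaining fields fall back to their defaults.

```python
# Example arguments payload for generate_multimodal_response.
# "model" and "image_urls" values are hypothetical placeholders;
# how the payload is sent depends on your MCP client library.
args = {
    "model": "qwen-vl-plus",                          # placeholder model name
    "prompt": "Describe the chart in this image.",
    "image_urls": ["https://example.com/chart.png"],  # placeholder URL
    "max_tokens": 1000,                               # schema default
    "temperature": 0.7,                               # schema default
    "provider": "dashscope",                          # one of: openai, dashscope
}

# Only "model" and "prompt" are required by the input schema.
required = {"model", "prompt"}
assert required <= set(args)
```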

Input Schema (JSON Schema)

{
  "properties": {
    "file_paths": {
      "anyOf": [
        { "items": { "type": "string" }, "type": "array" },
        { "type": "null" }
      ],
      "default": null,
      "title": "File Paths"
    },
    "image_urls": {
      "anyOf": [
        { "items": { "type": "string" }, "type": "array" },
        { "type": "null" }
      ],
      "default": null,
      "title": "Image Urls"
    },
    "max_tokens": {
      "anyOf": [{ "type": "integer" }, { "type": "null" }],
      "default": 1000,
      "title": "Max Tokens"
    },
    "model": { "title": "Model", "type": "string" },
    "prompt": { "title": "Prompt", "type": "string" },
    "provider": {
      "anyOf": [{ "type": "string" }, { "type": "null" }],
      "default": null,
      "title": "Provider"
    },
    "system_prompt": {
      "anyOf": [{ "type": "string" }, { "type": "null" }],
      "default": null,
      "title": "System Prompt"
    },
    "temperature": {
      "anyOf": [{ "type": "number" }, { "type": "null" }],
      "default": 0.7,
      "title": "Temperature"
    }
  },
  "required": ["model", "prompt"],
  "type": "object"
}
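Before calling the tool, a client can check a payload against the two schema rules that matter most: the required fields and the defaults. The sketch below is a minimal stdlib-only check, not a full JSON Schema validator; the function name is our own, not part of the server's API.

```python
# Minimal required-field check with default filling, mirroring the
# "required" and "default" keys of the JSON Schema above.
SCHEMA_REQUIRED = ["model", "prompt"]
SCHEMA_DEFAULTS = {"max_tokens": 1000, "temperature": 0.7}

def validate_args(payload: dict) -> dict:
    """Raise ValueError if a required key is missing; fill in schema defaults."""
    missing = [key for key in SCHEMA_REQUIRED if key not in payload]
    if missing:
        raise ValueError(f"missing required field(s): {missing}")
    out = dict(payload)
    for key, default in SCHEMA_DEFAULTS.items():
        out.setdefault(key, default)
    return out

ok = validate_args({"model": "gpt-4o", "prompt": "hi"})
```

For full JSON Schema semantics (e.g. the `anyOf` type unions), a library such as `jsonschema` would be the usual choice.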

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/StanleyChanH/vllm-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.
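The same GET request can be built with only the Python standard library; this is a sketch of the curl call above (the fetch itself is commented out since it requires network access):

```python
import urllib.request

# Build the equivalent of: curl -X GET '<url>'
url = "https://glama.ai/api/mcp/v1/servers/StanleyChanH/vllm-mcp"
req = urllib.request.Request(url, method="GET")

# To actually fetch the server metadata (requires network access):
# with urllib.request.urlopen(req) as resp:
#     body = resp.read().decode("utf-8")
```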