## run
Execute AI models locally through the Ollama MCP Server by specifying a model name and a prompt, enabling straightforward integration of local inference into MCP-powered applications.
### Instructions

Run a model.
### Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name of the model | |
| prompt | Yes | Prompt to send to the model | |
| timeout | No | Timeout in milliseconds | 60000 |
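
Below is a minimal sketch of calling this tool from a client built with the MCP TypeScript SDK. The server launch command and the model name (`llama3.2`) are illustrative placeholders; the tool name `run` and the `name`, `prompt`, and `timeout` arguments follow the schema above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the Ollama MCP Server over stdio.
  // The command/args below are placeholders; use your actual server entry point.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["ollama-mcp-server"],
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Call the `run` tool with the arguments defined in the input schema.
  const result = await client.callTool({
    name: "run",
    arguments: {
      name: "llama3.2",                         // required: model name (placeholder)
      prompt: "Summarize MCP in one sentence.", // required: prompt to send to the model
      timeout: 60000,                           // optional: milliseconds (default 60000)
    },
  });

  console.log(result.content);
  await client.close();
}

main().catch(console.error);
```

The `timeout` argument bounds how long the call waits for the model, so larger models or longer prompts may need a higher value than the 60000 ms default.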