generate
Generate a text completion from an Ollama model for a given prompt. Supports an optional system prompt, temperature, and max-tokens setting to customize inference.
Instructions
Run text generation with an Ollama model. Returns the model's raw completion for a given prompt.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | Ollama model name (e.g. `llama3`). | |
| prompt | Yes | The input prompt. | |
| system_prompt | No | Optional system message to guide the model. | `None` |
| temperature | No | Sampling temperature (0.0–2.0). Lower is more deterministic. | `None` |
| max_tokens | No | Maximum tokens to generate (mapped to Ollama's `num_predict` option). | `None` |
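As a sketch of how these parameters fit together, a tool call's arguments might look like the following. The prompt and values are illustrative, not from the source; only `model` and `prompt` are required:

```python
# Hypothetical arguments for a 'generate' tool call; names match the
# input schema above.
arguments = {
    "model": "llama3",
    "prompt": "Explain what an MCP tool is in one sentence.",
    "system_prompt": "You are a concise technical writer.",  # optional
    "temperature": 0.2,  # optional; lower is more deterministic
    "max_tokens": 128,   # optional; mapped to Ollama's num_predict
}
```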
Output Schema
| Name | Description |
|---|---|
| model | The model name echoed back from the request. |
| response | The raw completion text returned by Ollama. |
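The handler (shown under Implementation Reference) returns a plain dict with these two keys. An illustrative result, with made-up response text:

```python
# Illustrative shape of the tool's return value.
{
    "model": "llama3",
    "response": "An MCP tool is a callable capability a server exposes to clients.",
}
```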
Implementation Reference
- src/foundry_reverse/server.py:153-159 (registration): The `generate` MCP tool is registered here with the `@mcp.tool` decorator using `name="generate"`.
```python
@mcp.tool(
    name="generate",
    description=(
        "Run text generation with an Ollama model. Returns the model's "
        "raw completion for a given prompt."
    ),
)
```

- src/foundry_reverse/server.py:160-186 (handler): The handler function that executes the `generate` tool logic. It accepts `model`, `prompt`, `system_prompt`, `temperature`, and `max_tokens`; delegates to the Ollama client's `generate` (`oc.generate`); and returns the response.
```python
async def generate(
    model: str,
    prompt: str,
    system_prompt: str | None = None,
    temperature: float | None = None,
    max_tokens: int | None = None,
) -> dict[str, Any]:
    """
    Args:
        model: Ollama model name (e.g. 'llama3').
        prompt: The input prompt.
        system_prompt: Optional system message to guide the model.
        temperature: Sampling temperature (0.0–2.0). Lower is more deterministic.
        max_tokens: Maximum tokens to generate.
    """
    options: dict[str, Any] = {}
    if temperature is not None:
        options["temperature"] = temperature
    if max_tokens is not None:
        options["num_predict"] = max_tokens
    response = await oc.generate(
        model=model,
        prompt=prompt,
        system=system_prompt,
        options=options or None,
    )
    return {"model": model, "response": response}
```

- The underlying Ollama API client's `generate` function, which sends a POST request to the `/api/generate` endpoint and returns the raw response text.
```python
async def generate(
    model: str,
    prompt: str,
    system: str | None = None,
    options: dict[str, Any] | None = None,
) -> str:
    payload: dict[str, Any] = {
        "model": model,
        "prompt": prompt,
        "stream": False,
    }
    if system:
        payload["system"] = system
    if options:
        payload["options"] = options
    async with _client() as c:
        r = await c.post("/api/generate", json=payload)
        r.raise_for_status()
        return r.json().get("response", "")
```
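The `_client` helper used above is not shown in the source. A minimal sketch, assuming it wraps `httpx.AsyncClient` pointed at Ollama's default local endpoint (`http://localhost:11434`); the real helper may configure timeouts or headers differently:

```python
import asyncio
from typing import Any

import httpx

# ASSUMPTION: the _client helper referenced in the snippet above is an
# httpx.AsyncClient with Ollama's default local base URL.
def _client() -> httpx.AsyncClient:
    return httpx.AsyncClient(base_url="http://localhost:11434", timeout=60.0)

async def main() -> None:
    # Reuses the client-side generate() from the preceding snippet.
    text = await generate(
        model="llama3",
        prompt="Say hello in five words.",
        options={"temperature": 0.2, "num_predict": 32},
    )
    print(text)

if __name__ == "__main__":
    asyncio.run(main())
```

Because the payload sets `stream` to `False`, Ollama replies with a single JSON object whose `response` field holds the full completion, which is why the client can read it with one `r.json().get("response", "")`.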