
Ollama MCP Server

ollama_generate

Generate text completions from prompts using local LLM models for single-turn tasks like content creation or data formatting.

Instructions

Generate completion from a prompt. Simpler than chat, useful for single-turn completions.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| model | Yes | Name of the model to use | — |
| prompt | Yes | The prompt to generate from | — |
| options | No | Generation options (optional). Provide as a JSON-encoded string with settings like temperature, top_p, etc. | — |
| format | No | Response format for the output (`json` or `markdown`) | json |
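For illustration, a call to `ollama_generate` might carry arguments like the following (the model name is an example, and `options` is passed as a JSON-encoded string per the schema above):

```json
{
  "model": "llama3.2",
  "prompt": "Summarise the following text in one sentence: ...",
  "options": "{\"temperature\": 0.2, \"top_p\": 0.9}",
  "format": "markdown"
}
```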

Implementation Reference

  • Exports the ToolDefinition object that registers the 'ollama_generate' tool with the autoloader, including name, description, JSON input schema for MCP protocol, and the handler function.
    export const toolDefinition: ToolDefinition = {
      name: 'ollama_generate',
      description:
        'Generate completion from a prompt. Simpler than chat, useful for single-turn completions.',
      inputSchema: {
        type: 'object',
        properties: {
          model: {
            type: 'string',
            description: 'Name of the model to use',
          },
          prompt: {
            type: 'string',
            description: 'The prompt to generate from',
          },
          options: {
            type: 'string',
description: 'Generation options (optional). Provide as a JSON-encoded string with settings like temperature, top_p, etc.',
          },
          format: {
            type: 'string',
            enum: ['json', 'markdown'],
            default: 'json',
          },
        },
        required: ['model', 'prompt'],
      },
      handler: async (ollama: Ollama, args: Record<string, unknown>, format: ResponseFormat) => {
        const validated = GenerateInputSchema.parse(args);
        return generateWithModel(
          ollama,
          validated.model,
          validated.prompt,
          validated.options || {},
          format
        );
      },
    };
  • Zod schema for validating and parsing inputs to the ollama_generate tool, used in the handler.
    /**
     * Schema for ollama_generate tool
     */
    export const GenerateInputSchema = z.object({
      model: z.string().min(1),
      prompt: z.string(),
      options: parseJsonOrDefault({}).pipe(GenerationOptionsSchema),
      format: ResponseFormatSchema.default('json'),
      stream: z.boolean().default(false),
    });
  • Core helper function that performs the actual Ollama generate API call and formats the response.
    /**
     * Generate completion from a prompt
     */
    export async function generateWithModel(
      ollama: Ollama,
      model: string,
      prompt: string,
      options: GenerationOptions,
      format: ResponseFormat
    ): Promise<string> {
      const response = await ollama.generate({
        model,
        prompt,
        options,
        format: format === ResponseFormat.JSON ? 'json' : undefined,
        stream: false,
      });
    
      return formatResponse(response.response, format);
    }
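`parseJsonOrDefault` is a project-specific helper whose implementation is not shown above. A minimal sketch of the contract its name and usage imply (accept a JSON-encoded string, an object, or nothing, falling back to a default) might look like this; in the real code it is presumably wrapped in a Zod preprocess/transform step so it can be `.pipe()`d into `GenerationOptionsSchema`:

```typescript
// Hypothetical sketch of the parseJsonOrDefault helper used in GenerateInputSchema.
// Assumed contract: undefined/null -> default, JSON string -> parsed object,
// object -> passed through, malformed JSON -> default.
function parseJsonOrDefault(fallback: Record<string, unknown>) {
  return (value: unknown): Record<string, unknown> => {
    if (value === undefined || value === null) return fallback;
    if (typeof value === 'string') {
      try {
        return JSON.parse(value) as Record<string, unknown>;
      } catch {
        return fallback; // malformed JSON falls back rather than throwing
      }
    }
    return value as Record<string, unknown>; // already an object
  };
}
```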
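`formatResponse` and `ResponseFormat` are likewise imported from elsewhere in the project. A plausible minimal sketch, assuming JSON mode wraps the raw completion in an object so callers get parseable output while markdown mode returns the model text verbatim (the real helper may differ):

```typescript
// Hypothetical sketch of the project's formatResponse helper and ResponseFormat enum.
enum ResponseFormat {
  JSON = 'json',
  Markdown = 'markdown',
}

function formatResponse(text: string, format: ResponseFormat): string {
  // Assumption: JSON mode wraps the completion for parseable output;
  // markdown mode passes the model text through unchanged.
  return format === ResponseFormat.JSON
    ? JSON.stringify({ response: text })
    : text;
}
```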
