
generate_text

Generate high-quality text using a fine-tuned Unsloth model by specifying a prompt, model path, and optional parameters like temperature, top-p, and max tokens for tailored outputs.

Instructions

Generate text using a fine-tuned Unsloth model

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| max_new_tokens | No | Maximum number of tokens to generate | 256 |
| model_path | Yes | Path to the fine-tuned model | |
| prompt | Yes | Prompt for text generation | |
| temperature | No | Temperature for text generation | 0.7 |
| top_p | No | Top-p for text generation | 0.9 |
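
A typical call supplies only the required fields; the optional sampling parameters fall back to the defaults above (taken from the handler code below). For illustration, the model path here is a hypothetical placeholder:

```typescript
// Hypothetical arguments for a generate_text tool call.
const args = {
  model_path: './outputs/my-finetuned-model', // required; placeholder path
  prompt: 'Summarize the benefits of LoRA fine-tuning.', // required
  max_new_tokens: 128, // optional, defaults to 256
  temperature: 0.7,    // optional, defaults to 0.7
  top_p: 0.9,          // optional, defaults to 0.9
};
```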

Implementation Reference

  • The handler function for the 'generate_text' tool. It destructures the arguments, builds a Python script that loads the model and tokenizer with transformers and generates text through a text-generation pipeline, executes the script via executeUnslothScript, parses the JSON it prints, and returns the generated text.
```typescript
case 'generate_text': {
  const {
    model_path,
    prompt,
    max_new_tokens = 256,
    temperature = 0.7,
    top_p = 0.9,
  } = args as {
    model_path: string;
    prompt: string;
    max_new_tokens?: number;
    temperature?: number;
    top_p?: number;
  };

  const script = `
import json

try:
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

    # Load the model and tokenizer
    model = AutoModelForCausalLM.from_pretrained("${model_path}")
    tokenizer = AutoTokenizer.from_pretrained("${model_path}")

    # Create a text generation pipeline
    generator = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        max_new_tokens=${max_new_tokens},
        temperature=${temperature},
        top_p=${top_p},
        do_sample=True
    )

    # Generate text
    result = generator("${prompt.replace(/"/g, '\\"')}")

    print(json.dumps({
        "success": True,
        "prompt": "${prompt.replace(/"/g, '\\"')}",
        "generated_text": result[0]["generated_text"]
    }))
except Exception as e:
    print(json.dumps({"error": str(e), "success": False}))
`;

  const result = await this.executeUnslothScript(script);

  try {
    const generationResult = JSON.parse(result);
    if (!generationResult.success) {
      throw new Error(generationResult.error);
    }
    return {
      content: [
        {
          type: 'text',
          text: `Generated text:\n\n${generationResult.generated_text}`,
        },
      ],
    };
  } catch (error: any) {
    throw new Error(`Error generating text: ${error.message}`);
  }
}
```
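
executeUnslothScript itself is not shown on this page. A minimal sketch of what such a helper might look like, assuming it spawns a Python interpreter and resolves with the script's stdout; the `python` binary name and the error handling are assumptions, not the server's actual implementation:

```typescript
import { spawn } from 'child_process';

// Hypothetical sketch: run the generated Python script in a child process
// and resolve with whatever it prints to stdout, which the caller then
// passes to JSON.parse.
function executeUnslothScript(script: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const proc = spawn('python', ['-c', script]);
    let stdout = '';
    let stderr = '';
    proc.stdout.on('data', (chunk) => { stdout += chunk; });
    proc.stderr.on('data', (chunk) => { stderr += chunk; });
    proc.on('close', (code) => {
      if (code === 0) {
        resolve(stdout.trim());
      } else {
        reject(new Error(`Python exited with code ${code}: ${stderr}`));
      }
    });
  });
}
```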
  • Input schema defining the parameters for the generate_text tool: the required model_path and prompt, plus the optional generation parameters.
```typescript
inputSchema: {
  type: 'object',
  properties: {
    model_path: {
      type: 'string',
      description: 'Path to the fine-tuned model',
    },
    prompt: {
      type: 'string',
      description: 'Prompt for text generation',
    },
    max_new_tokens: {
      type: 'number',
      description: 'Maximum number of tokens to generate',
    },
    temperature: {
      type: 'number',
      description: 'Temperature for text generation',
    },
    top_p: {
      type: 'number',
      description: 'Top-p for text generation',
    },
  },
  required: ['model_path', 'prompt'],
},
```
  • src/index.ts:170-199 (registration)
    Registration of the generate_text tool in the tool list returned by the ListToolsRequestSchema handler, including its name, description, and inputSchema.
```typescript
{
  name: 'generate_text',
  description: 'Generate text using a fine-tuned Unsloth model',
  inputSchema: {
    type: 'object',
    properties: {
      model_path: {
        type: 'string',
        description: 'Path to the fine-tuned model',
      },
      prompt: {
        type: 'string',
        description: 'Prompt for text generation',
      },
      max_new_tokens: {
        type: 'number',
        description: 'Maximum number of tokens to generate',
      },
      temperature: {
        type: 'number',
        description: 'Temperature for text generation',
      },
      top_p: {
        type: 'number',
        description: 'Top-p for text generation',
      },
    },
    required: ['model_path', 'prompt'],
  },
},
```
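
In the MCP TypeScript SDK, tool definitions like this one are returned from the ListToolsRequestSchema handler. A minimal sketch of that wiring, assuming the standard @modelcontextprotocol/sdk server setup; the version string and the generateTextToolDefinition name are placeholders:

```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

// Sketch: a server that advertises its tools, including generate_text.
const server = new Server(
  { name: 'unsloth-mcp-server', version: '0.1.0' }, // version is a placeholder
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    // ...other tools...
    generateTextToolDefinition, // the definition object shown above
  ],
}));
```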


MCP directory API

We provide all the information about MCP servers via our MCP directory API. For example, to fetch this server's entry:

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/OtotaO/unsloth-mcp-server'
```
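
The same request from TypeScript using the built-in fetch API; the shape of the returned metadata is whatever the directory API serves and is not documented here:

```typescript
// Fetch this server's directory entry, mirroring the curl command above.
const res = await fetch(
  'https://glama.ai/api/mcp/v1/servers/OtotaO/unsloth-mcp-server'
);
if (!res.ok) throw new Error(`Request failed: ${res.status}`);
const entry = await res.json(); // server metadata from the directory API
console.log(entry);
```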

If you have feedback or need assistance with the MCP directory API, please join our Discord server.