
Unsloth MCP Server

by OtotaO

generate_text

Generate text using fine-tuned Unsloth models by providing a prompt and model path, with options to control output length and creativity parameters.

Instructions

Generate text using a fine-tuned Unsloth model

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| model_path | Yes | Path to the fine-tuned model | |
| prompt | Yes | Prompt for text generation | |
| max_new_tokens | No | Maximum number of new tokens to generate | 256 |
| temperature | No | Sampling temperature; higher values produce more varied output | 0.7 |
| top_p | No | Top-p (nucleus) sampling cutoff | 0.9 |
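When the optional parameters are omitted, the handler falls back to the defaults shown above (256, 0.7, 0.9) via destructuring defaults. A minimal sketch of that resolution step (the `withDefaults` helper is illustrative, not part of the server):

```typescript
interface GenerateTextArgs {
  model_path: string;
  prompt: string;
  max_new_tokens?: number;
  temperature?: number;
  top_p?: number;
}

// Mirrors the destructuring defaults used in the handler.
function withDefaults(args: GenerateTextArgs): Required<GenerateTextArgs> {
  const { max_new_tokens = 256, temperature = 0.7, top_p = 0.9 } = args;
  return { ...args, max_new_tokens, temperature, top_p };
}

const resolved = withDefaults({ model_path: "./model", prompt: "Hello" });
console.log(resolved.max_new_tokens); // 256
```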

Implementation Reference

  • Handler for 'generate_text' tool. Parses input arguments, constructs a Python script that loads the specified model using Transformers, sets up a text-generation pipeline with given parameters, generates text from the prompt, and returns the generated text.
case 'generate_text': {
  const {
    model_path,
    prompt,
    max_new_tokens = 256,
    temperature = 0.7,
    top_p = 0.9,
  } = args as {
    model_path: string;
    prompt: string;
    max_new_tokens?: number;
    temperature?: number;
    top_p?: number;
  };

  const script = `
import json
try:
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

    # Load the model and tokenizer
    model = AutoModelForCausalLM.from_pretrained("${model_path}")
    tokenizer = AutoTokenizer.from_pretrained("${model_path}")

    # Create a text generation pipeline
    generator = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        max_new_tokens=${max_new_tokens},
        temperature=${temperature},
        top_p=${top_p},
        do_sample=True
    )

    # Generate text
    result = generator("${prompt.replace(/"/g, '\\"')}")

    print(json.dumps({
        "success": True,
        "prompt": "${prompt.replace(/"/g, '\\"')}",
        "generated_text": result[0]["generated_text"]
    }))
except Exception as e:
    print(json.dumps({"error": str(e), "success": False}))
`;
  const result = await this.executeUnslothScript(script);

  try {
    const generationResult = JSON.parse(result);
    if (!generationResult.success) {
      throw new Error(generationResult.error);
    }

    return {
      content: [
        {
          type: 'text',
          text: `Generated text:\n\n${generationResult.generated_text}`,
        },
      ],
    };
  } catch (error: any) {
    throw new Error(`Error generating text: ${error.message}`);
  }
}
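Note that the handler interpolates the prompt into the Python script after escaping only double quotes, so prompts containing backslashes or newlines would break the generated script. A hedged sketch of a more complete escaper for double-quoted Python string literals (the `escapeForPythonString` helper is hypothetical, not part of the server):

```typescript
// Hypothetical helper: escape a JS string for embedding inside a
// double-quoted Python string literal. Order matters: backslashes
// must be escaped before the sequences that introduce new ones.
function escapeForPythonString(s: string): string {
  return s
    .replace(/\\/g, "\\\\") // backslashes first
    .replace(/"/g, '\\"')   // double quotes
    .replace(/\n/g, "\\n")  // newlines
    .replace(/\r/g, "\\r"); // carriage returns
}

console.log(escapeForPythonString('He said "hi"')); // → He said \"hi\"
```

Passing the prompt out of band (for example via an environment variable or a temporary JSON file read by the script) would avoid string interpolation entirely; this is a design alternative, not what the code above does.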
  • Input schema defining the parameters for the generate_text tool: model_path (required), prompt (required), max_new_tokens, temperature, top_p.
    inputSchema: {
      type: 'object',
      properties: {
        model_path: {
          type: 'string',
          description: 'Path to the fine-tuned model',
        },
        prompt: {
          type: 'string',
          description: 'Prompt for text generation',
        },
        max_new_tokens: {
          type: 'number',
          description: 'Maximum number of tokens to generate',
        },
        temperature: {
          type: 'number',
          description: 'Temperature for text generation',
        },
        top_p: {
          type: 'number',
          description: 'Top-p for text generation',
        },
      },
      required: ['model_path', 'prompt'],
    },
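The handler casts `args` directly, relying on the schema's `required` list being enforced upstream. A minimal runtime check mirroring that list could look like this (the `isGenerateTextArgs` guard is illustrative, not present in the server):

```typescript
// Hypothetical type guard: verify the two required fields from the
// JSON Schema before destructuring the arguments.
function isGenerateTextArgs(
  a: unknown
): a is { model_path: string; prompt: string } {
  return (
    typeof a === "object" &&
    a !== null &&
    typeof (a as any).model_path === "string" &&
    typeof (a as any).prompt === "string"
  );
}

console.log(isGenerateTextArgs({ model_path: "./m", prompt: "hi" })); // true
```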
  • src/index.ts:170-199 (registration)
    Registration of the 'generate_text' tool in the ListTools response, including name, description, and input schema.
    {
      name: 'generate_text',
      description: 'Generate text using a fine-tuned Unsloth model',
      inputSchema: {
        type: 'object',
        properties: {
          model_path: {
            type: 'string',
            description: 'Path to the fine-tuned model',
          },
          prompt: {
            type: 'string',
            description: 'Prompt for text generation',
          },
          max_new_tokens: {
            type: 'number',
            description: 'Maximum number of tokens to generate',
          },
          temperature: {
            type: 'number',
            description: 'Temperature for text generation',
          },
          top_p: {
            type: 'number',
            description: 'Top-p for text generation',
          },
        },
        required: ['model_path', 'prompt'],
      },
    },
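Once registered, a client invokes the tool with a `tools/call` request. A sketch of the request shape, assuming the standard MCP JSON-RPC framing (the model path shown is a hypothetical placeholder):

```typescript
// Assumed shape of the MCP tools/call request for generate_text.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "generate_text",
    arguments: {
      model_path: "./outputs/my-finetuned-model", // hypothetical path
      prompt: "Summarize the benefits of LoRA fine-tuning.",
      max_new_tokens: 128,
      temperature: 0.7,
      top_p: 0.9,
    },
  },
};

console.log(request.params.name); // generate_text
```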
