
Unsloth MCP Server

by OtotaO

finetune_model

Fine-tune large language models with Unsloth's optimizations for faster training and reduced memory usage. Specify the model, the dataset, and training parameters such as LoRA rank, batch size, and learning rate.

Instructions

Fine-tune a model with Unsloth optimizations

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| batch_size | No | Batch size for training | |
| dataset_name | Yes | Name of the dataset to use for fine-tuning | |
| dataset_text_field | No | Field in the dataset containing the text | |
| gradient_accumulation_steps | No | Number of gradient accumulation steps | |
| learning_rate | No | Learning rate for training | |
| load_in_4bit | No | Whether to use 4-bit quantization | |
| lora_alpha | No | Alpha for LoRA fine-tuning | |
| lora_rank | No | Rank for LoRA fine-tuning | |
| max_seq_length | No | Maximum sequence length for training | |
| max_steps | No | Maximum number of training steps | |
| model_name | Yes | Name of the model to fine-tune | |
| output_dir | Yes | Directory to save the fine-tuned model | |
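
For illustration, a tool call might pass an arguments object like the one below. The model and dataset names are hypothetical examples, not defaults shipped with the server; any Unsloth-supported model and compatible dataset should work, and the numeric values are typical LoRA settings rather than recommendations from this server's documentation.

{
  "model_name": "unsloth/llama-3-8b-bnb-4bit",
  "dataset_name": "yahma/alpaca-cleaned",
  "output_dir": "./finetuned_model",
  "dataset_text_field": "text",
  "load_in_4bit": true,
  "lora_rank": 16,
  "lora_alpha": 16,
  "batch_size": 2,
  "gradient_accumulation_steps": 4,
  "learning_rate": 0.0002,
  "max_seq_length": 2048,
  "max_steps": 100
}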

Input Schema (JSON Schema)

{ "properties": { "batch_size": { "description": "Batch size for training", "type": "number" }, "dataset_name": { "description": "Name of the dataset to use for fine-tuning", "type": "string" }, "dataset_text_field": { "description": "Field in the dataset containing the text", "type": "string" }, "gradient_accumulation_steps": { "description": "Number of gradient accumulation steps", "type": "number" }, "learning_rate": { "description": "Learning rate for training", "type": "number" }, "load_in_4bit": { "description": "Whether to use 4-bit quantization", "type": "boolean" }, "lora_alpha": { "description": "Alpha for LoRA fine-tuning", "type": "number" }, "lora_rank": { "description": "Rank for LoRA fine-tuning", "type": "number" }, "max_seq_length": { "description": "Maximum sequence length for training", "type": "number" }, "max_steps": { "description": "Maximum number of training steps", "type": "number" }, "model_name": { "description": "Name of the model to fine-tune", "type": "string" }, "output_dir": { "description": "Directory to save the fine-tuned model", "type": "string" } }, "required": [ "model_name", "dataset_name", "output_dir" ], "type": "object" }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/OtotaO/unsloth-mcp-server'