finetune_model
Fine-tune large language models through the Unsloth MCP Server, which applies Unsloth's memory and speed optimizations during training. Specify a model and dataset, configure parameters such as LoRA rank and batch size, and the fine-tuned model is saved to the given output directory.
Instructions
Fine-tune a model with Unsloth optimizations
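As a minimal sketch of how a client might invoke this tool, the following uses the official `mcp` Python SDK with the parameters listed under Input Schema below. The launch command (`node build/index.js`) and the model and dataset names are placeholders; substitute the actual entry point and arguments for your installation.

```python
# Sketch: calling finetune_model over stdio with the MCP Python SDK.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical launch command -- adjust to where your
    # Unsloth MCP Server build actually lives.
    server = StdioServerParameters(command="node", args=["build/index.js"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "finetune_model",
                arguments={
                    "model_name": "unsloth/llama-3-8b-bnb-4bit",  # example model
                    "dataset_name": "yahma/alpaca-cleaned",       # example dataset
                    "output_dir": "outputs",
                    "lora_rank": 16,
                    "batch_size": 2,
                    "max_steps": 60,
                },
            )
            print(result.content)


asyncio.run(main())
```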
Input Schema
Name | Required | Description | Default |
---|---|---|---|
batch_size | No | Batch size for training | |
dataset_name | Yes | Name of the dataset to use for fine-tuning | |
dataset_text_field | No | Field in the dataset containing the text | |
gradient_accumulation_steps | No | Number of gradient accumulation steps | |
learning_rate | No | Learning rate for training | |
load_in_4bit | No | Whether to use 4-bit quantization | |
lora_alpha | No | Alpha scaling factor for LoRA fine-tuning | |
lora_rank | No | Rank for LoRA fine-tuning | |
max_seq_length | No | Maximum sequence length for training | |
max_steps | No | Maximum number of training steps | |
model_name | Yes | Name of the model to fine-tune | |
output_dir | Yes | Directory to save the fine-tuned model | |
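For reference, these parameters map onto a standard Unsloth LoRA training run. The sketch below shows what an equivalent run might look like in Python directly; it illustrates the underlying workflow rather than the server's actual implementation, and the model name, dataset name, and prompt template are placeholders.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model; kwargs mirror model_name, max_seq_length, load_in_4bit.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # placeholder model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; r and lora_alpha mirror lora_rank / lora_alpha.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("yahma/alpaca-cleaned", split="train")  # placeholder dataset

# Collapse each record into a single "text" column (dataset_text_field).
# This prompt template is illustrative, not prescribed by the server.
def to_text(example):
    return {"text": f"### Instruction:\n{example['instruction']}\n\n"
                    f"### Response:\n{example['output']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # batch_size
        gradient_accumulation_steps=4,   # gradient_accumulation_steps
        learning_rate=2e-4,              # learning_rate
        max_steps=60,                    # max_steps
        output_dir="outputs",            # output_dir
    ),
)
trainer.train()
model.save_pretrained("outputs")
```

Note that on newer versions of `trl`, `dataset_text_field` and `max_seq_length` move from the `SFTTrainer` constructor into `SFTConfig`; the form above follows the classic Unsloth notebook recipe.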