# finetune_model
Fine-tune large language models with Unsloth optimizations for faster training and reduced memory use. Specify the model, dataset, and training parameters such as LoRA rank, batch size, and learning rate.
## Instructions
Fine-tune a model with Unsloth optimizations
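For example, a typical call might pass arguments like the following. This is a sketch only: the model and dataset names are placeholders, and every field besides the three required ones is optional (see the schema below).

```python
# Hypothetical finetune_model arguments; only model_name, dataset_name,
# and output_dir are required. All values here are illustrative.
arguments = {
    "model_name": "unsloth/llama-3-8b-bnb-4bit",   # placeholder model
    "dataset_name": "yahma/alpaca-cleaned",        # placeholder dataset
    "output_dir": "./outputs",
    "dataset_text_field": "text",
    "load_in_4bit": True,
    "lora_rank": 16,
    "lora_alpha": 16,
    "max_seq_length": 2048,
    "batch_size": 2,
    "gradient_accumulation_steps": 4,
    "learning_rate": 2e-4,
    "max_steps": 60,
}
```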
## Input Schema
Name | Required | Description | Default |
---|---|---|---|
batch_size | No | Batch size for training | |
dataset_name | Yes | Name of the dataset to use for fine-tuning | |
dataset_text_field | No | Dataset field containing the training text | |
gradient_accumulation_steps | No | Number of gradient accumulation steps | |
learning_rate | No | Learning rate for the optimizer | |
load_in_4bit | No | Whether to load the model with 4-bit quantization | |
lora_alpha | No | LoRA alpha scaling factor | |
lora_rank | No | Rank of the LoRA adapter matrices | |
max_seq_length | No | Maximum sequence length for training | |
max_steps | No | Maximum number of training steps | |
model_name | Yes | Name of the base model to fine-tune | |
output_dir | Yes | Directory to save the fine-tuned model | |
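To see what these parameters usually drive, the sketch below shows the standard Unsloth/TRL training loop that a tool like this presumably wraps. It is an assumption about the implementation, not the tool's actual code; the model and dataset names are placeholders, and comments note which schema field each call maps to.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model with Unsloth's optimized loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # -> model_name (placeholder)
    max_seq_length=2048,                       # -> max_seq_length
    load_in_4bit=True,                         # -> load_in_4bit
)

# Attach LoRA adapters; r and lora_alpha map to lora_rank / lora_alpha.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("yahma/alpaca-cleaned", split="train")  # -> dataset_name

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # -> dataset_text_field
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # -> batch_size
        gradient_accumulation_steps=4,  # -> gradient_accumulation_steps
        learning_rate=2e-4,             # -> learning_rate
        max_steps=60,                   # -> max_steps
        output_dir="./outputs",         # -> output_dir
    ),
)
trainer.train()
model.save_pretrained("./outputs")  # saves the trained LoRA adapters
```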