# export_model
Convert a fine-tuned model to a deployment format (gguf, ollama, vllm, huggingface), specifying the model path, output path, and optional quantization bits. Note that only the `gguf` and `huggingface` formats are currently implemented; `ollama` and `vllm` are accepted by the schema but return a not-implemented error.
## Instructions
Export a fine-tuned Unsloth model to various formats
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| export_format | Yes | Format to export to (gguf, ollama, vllm, huggingface) | |
| model_path | Yes | Path to the fine-tuned model | |
| output_path | Yes | Path to save the exported model | |
| quantization_bits | No | Bits for quantization (GGUF export only) | 4 |
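
The `quantization_bits` default of 4 comes from the handler's destructuring shown under Implementation Reference. For orientation, here is a minimal sketch of invoking the tool from a TypeScript MCP client; the transport wiring, server command, and file paths are assumptions for illustration, not part of this server:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Hypothetical wiring: point command/args at wherever this server's
// built entry point lives in your installation.
const transport = new StdioClientTransport({
  command: 'node',
  args: ['build/index.js'],
});
const client = new Client({ name: 'example-client', version: '0.1.0' });
await client.connect(transport);

// The three required fields; quantization_bits is omitted, so the handler
// falls back to its default of 4 (only relevant for gguf exports anyway).
const result = await client.callTool({
  name: 'export_model',
  arguments: {
    model_path: './outputs/my-finetuned-model', // hypothetical path
    export_format: 'huggingface',
    output_path: './exports/my-model-hf',       // hypothetical path
  },
});
console.log(result.content);
```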
## Implementation Reference
- **src/index.ts:200-226 (registration)**: registers the `export_model` tool with its name, description, and input schema.

```typescript
{
  name: 'export_model',
  description: 'Export a fine-tuned Unsloth model to various formats',
  inputSchema: {
    type: 'object',
    properties: {
      model_path: {
        type: 'string',
        description: 'Path to the fine-tuned model',
      },
      export_format: {
        type: 'string',
        description: 'Format to export to (gguf, ollama, vllm, huggingface)',
        enum: ['gguf', 'ollama', 'vllm', 'huggingface'],
      },
      output_path: {
        type: 'string',
        description: 'Path to save the exported model',
      },
      quantization_bits: {
        type: 'number',
        description: 'Bits for quantization (for GGUF export)',
      },
    },
    required: ['model_path', 'export_format', 'output_path'],
  },
},
```
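
Since `inputSchema` is plain JSON Schema, requests can also be checked outside the server. A small sketch using Ajv (an assumed dev dependency used here purely for illustration; the real server relies on the MCP SDK's request handling):

```typescript
import Ajv from 'ajv';

// Same shape as the registered inputSchema above, minus descriptions.
const inputSchema = {
  type: 'object',
  properties: {
    model_path: { type: 'string' },
    export_format: {
      type: 'string',
      enum: ['gguf', 'ollama', 'vllm', 'huggingface'],
    },
    output_path: { type: 'string' },
    quantization_bits: { type: 'number' },
  },
  required: ['model_path', 'export_format', 'output_path'],
};

const validate = new Ajv().compile(inputSchema);

// true: all required fields present, enum satisfied
console.log(validate({ model_path: './m', export_format: 'gguf', output_path: './m.gguf' }));
// false: 'mlx' is not in the export_format enum
console.log(validate({ model_path: './m', export_format: 'mlx', output_path: './out' }));
```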
- **src/index.ts:552-661 (handler)**: handles `export_model` calls. It extracts the arguments (defaulting `quantization_bits` to 4), builds a Python script for the requested `export_format`, runs it via `executeUnslothScript`, parses the printed JSON, and returns a success or error response. Only `gguf` and `huggingface` have script branches; `ollama` and `vllm` return a not-implemented error. Note that the `gguf` branch delegates to CTranslate2, which converts to CTranslate2's own model format rather than an actual `.gguf` file, so that branch's output should be verified before deployment.

```typescript
case 'export_model': {
  const {
    model_path,
    export_format,
    output_path,
    quantization_bits = 4,
  } = args as {
    model_path: string;
    export_format: 'gguf' | 'ollama' | 'vllm' | 'huggingface';
    output_path: string;
    quantization_bits?: number;
  };

  let script = '';
  if (export_format === 'gguf') {
    script = `
import json
import os

try:
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    # Create output directory if it doesn't exist
    os.makedirs(os.path.dirname("${output_path}"), exist_ok=True)

    # Load the model and tokenizer
    model = AutoModelForCausalLM.from_pretrained("${model_path}")
    tokenizer = AutoTokenizer.from_pretrained("${model_path}")

    # Save the model in GGUF format
    from transformers import LlamaForCausalLM
    import ctranslate2

    # Convert to GGUF format
    ct_model = ctranslate2.converters.TransformersConverter(
        "${model_path}",
        "${output_path}",
        quantization="int${quantization_bits}"
    ).convert()

    print(json.dumps({
        "success": True,
        "model_path": "${model_path}",
        "export_format": "gguf",
        "output_path": "${output_path}",
        "quantization_bits": ${quantization_bits}
    }))
except Exception as e:
    print(json.dumps({"error": str(e), "success": False}))
`;
  } else if (export_format === 'huggingface') {
    script = `
import json
import os

try:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Create output directory if it doesn't exist
    os.makedirs("${output_path}", exist_ok=True)

    # Load the model and tokenizer
    model = AutoModelForCausalLM.from_pretrained("${model_path}")
    tokenizer = AutoTokenizer.from_pretrained("${model_path}")

    # Save the model in Hugging Face format
    model.save_pretrained("${output_path}")
    tokenizer.save_pretrained("${output_path}")

    print(json.dumps({
        "success": True,
        "model_path": "${model_path}",
        "export_format": "huggingface",
        "output_path": "${output_path}"
    }))
except Exception as e:
    print(json.dumps({"error": str(e), "success": False}))
`;
  } else {
    return {
      content: [
        {
          type: 'text',
          text: `Export format '${export_format}' is not yet implemented. Currently, only 'gguf' and 'huggingface' formats are supported.`,
        },
      ],
      isError: true,
    };
  }

  const result = await this.executeUnslothScript(script);

  try {
    const exportResult = JSON.parse(result);
    if (!exportResult.success) {
      throw new Error(exportResult.error);
    }
    return {
      content: [
        {
          type: 'text',
          text: `Successfully exported model to ${export_format} format:\n\n${JSON.stringify(exportResult, null, 2)}`,
        },
      ],
    };
  } catch (error: any) {
    throw new Error(`Error exporting model: ${error.message}`);
  }
}
```
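
`executeUnslothScript` is defined elsewhere in src/index.ts and is not shown in this section. A plausible minimal sketch, assuming it shells out to a Python interpreter and resolves with the captured stdout (the real helper may differ in interpreter choice and environment handling):

```typescript
import { spawn } from 'node:child_process';

// Hypothetical reconstruction: run the generated script with `python3 -c`
// and hand back whatever it printed. The embedded scripts always print a
// single JSON object (a success payload or {"error": ..., "success": false}),
// which is why the handler can JSON.parse the result directly.
function executeUnslothScript(script: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const proc = spawn('python3', ['-c', script]);
    let stdout = '';
    let stderr = '';
    proc.stdout.on('data', (chunk) => (stdout += chunk));
    proc.stderr.on('data', (chunk) => (stderr += chunk));
    proc.on('error', reject);
    proc.on('close', (code) => {
      if (code === 0) resolve(stdout.trim());
      else reject(new Error(`Python exited with code ${code}: ${stderr}`));
    });
  });
}
```

Because both script branches wrap their work in try/except and print JSON even on failure, the handler's JSON.parse-then-throw pattern surfaces Python-side errors as tool errors rather than silent failures.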