
Unsloth MCP Server

by OtotaO

export_model

Export fine-tuned Unsloth models to GGUF, Ollama, vLLM, or Hugging Face formats for deployment, supporting quantization to optimize file size and performance.

Instructions

Export a fine-tuned Unsloth model to various formats

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| model_path | Yes | Path to the fine-tuned model | |
| export_format | Yes | Format to export to (gguf, ollama, vllm, huggingface) | |
| output_path | Yes | Path to save the exported model | |
| quantization_bits | No | Bits for quantization (for GGUF export) | 4 |
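Given the schema above, a call payload might look like the following sketch. The paths are hypothetical, and the check simply mirrors the schema's `required` list:

```python
import json

# Illustrative arguments for an export_model call (paths are hypothetical)
args = {
    "model_path": "./outputs/llama3-finetuned",
    "export_format": "gguf",
    "output_path": "./exports/llama3-finetuned-q4",
    "quantization_bits": 4,  # optional; the handler defaults to 4
}

# Minimal check mirroring the schema's required fields
required = ["model_path", "export_format", "output_path"]
missing = [k for k in required if k not in args]
assert not missing, f"missing required fields: {missing}"

print(json.dumps(args, indent=2))
```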

Implementation Reference

  • Main execution logic for the export_model tool: parses arguments, generates a format-specific Python script (ctranslate2 for the GGUF branch, transformers for Hugging Face), runs it via executeUnslothScript, and returns the parsed result.
              case 'export_model': {
                const {
                  model_path,
                  export_format,
                  output_path,
                  quantization_bits = 4,
                } = args as {
                  model_path: string;
                  export_format: 'gguf' | 'ollama' | 'vllm' | 'huggingface';
                  output_path: string;
                  quantization_bits?: number;
                };
    
                let script = '';
                
                if (export_format === 'gguf') {
                  script = `
    import json
    import os
    try:
        import ctranslate2

        # Create output directory if it doesn't exist
        os.makedirs(os.path.dirname("${output_path}") or ".", exist_ok=True)

        # Convert with CTranslate2. Note: this emits a CTranslate2 model
        # directory, not a true GGUF file (GGUF requires llama.cpp tooling).
        converter = ctranslate2.converters.TransformersConverter("${model_path}")
        converter.convert("${output_path}", quantization="int${quantization_bits}")

        print(json.dumps({
            "success": True,
            "model_path": "${model_path}",
            "export_format": "gguf",
            "output_path": "${output_path}",
            "quantization_bits": ${quantization_bits}
        }))
    except Exception as e:
        print(json.dumps({"error": str(e), "success": False}))
    `;
                } else if (export_format === 'huggingface') {
                  script = `
    import json
    import os
    try:
        from transformers import AutoModelForCausalLM, AutoTokenizer
        
        # Create output directory if it doesn't exist
        os.makedirs("${output_path}", exist_ok=True)
        
        # Load the model and tokenizer
        model = AutoModelForCausalLM.from_pretrained("${model_path}")
        tokenizer = AutoTokenizer.from_pretrained("${model_path}")
        
        # Save the model in Hugging Face format
        model.save_pretrained("${output_path}")
        tokenizer.save_pretrained("${output_path}")
        
        print(json.dumps({
            "success": True,
            "model_path": "${model_path}",
            "export_format": "huggingface",
            "output_path": "${output_path}"
        }))
    except Exception as e:
        print(json.dumps({"error": str(e), "success": False}))
    `;
                } else {
                  return {
                    content: [
                      {
                        type: 'text',
                        text: `Export format '${export_format}' is not yet implemented. Currently, only 'gguf' and 'huggingface' formats are supported.`,
                      },
                    ],
                    isError: true,
                  };
                }
                
                const result = await this.executeUnslothScript(script);
                
                try {
                  const exportResult = JSON.parse(result);
                  if (!exportResult.success) {
                    throw new Error(exportResult.error);
                  }
                  
                  return {
                    content: [
                      {
                        type: 'text',
                        text: `Successfully exported model to ${export_format} format:\n\n${JSON.stringify(exportResult, null, 2)}`,
                      },
                    ],
                  };
                } catch (error: any) {
                  throw new Error(`Error exporting model: ${error.message}`);
                }
              }
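The handler splices `model_path` and `output_path` into the generated Python source with raw template literals, so a path containing a quote or backslash would break, or silently alter, the script. One way to harden this (a sketch, not the server's actual code) is to serialize every user-supplied value with `json.dumps` (or `JSON.stringify` on the TypeScript side) so it arrives quoted and escaped:

```python
import json

def build_export_script(model_path: str, output_path: str, bits: int) -> str:
    """Sketch: embed user-supplied paths via json.dumps so quotes and
    backslashes are escaped instead of being spliced raw into the source."""
    mp, op = json.dumps(model_path), json.dumps(output_path)
    return f"""
import json
print(json.dumps({{
    "success": True,
    "model_path": {mp},
    "output_path": {op},
    "quantization_bits": {int(bits)}
}}))
"""

# A path with an embedded quote no longer breaks the generated script
script = build_export_script('model "v1"', "/tmp/out", 4)
compile(script, "<generated>", "exec")  # still parses as valid Python
```

Casting `bits` through `int()` also blocks arbitrary code from being injected via the numeric parameter.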
  • src/index.ts:200-226 (registration)
    Tool registration in MCP server.setTools(), defining name, description, and input schema for validation.
    {
      name: 'export_model',
      description: 'Export a fine-tuned Unsloth model to various formats',
      inputSchema: {
        type: 'object',
        properties: {
          model_path: {
            type: 'string',
            description: 'Path to the fine-tuned model',
          },
          export_format: {
            type: 'string',
            description: 'Format to export to (gguf, ollama, vllm, huggingface)',
            enum: ['gguf', 'ollama', 'vllm', 'huggingface'],
          },
          output_path: {
            type: 'string',
            description: 'Path to save the exported model',
          },
          quantization_bits: {
            type: 'number',
            description: 'Bits for quantization (for GGUF export)',
          },
        },
        required: ['model_path', 'export_format', 'output_path'],
      },
    },
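As noted in the handler comments, the 'gguf' branch actually drives CTranslate2, whose converter emits a CTranslate2 model directory rather than a GGUF file. A true GGUF export typically goes through llama.cpp's convert_hf_to_gguf.py followed by llama-quantize. The sketch below only assembles those commands; the script locations and paths are assumptions about a local llama.cpp checkout, and nothing is executed:

```python
import sys

def gguf_export_commands(model_path: str, output_path: str, bits: int):
    """Sketch: build the llama.cpp commands for a real GGUF export.
    Assumes a local llama.cpp checkout; nothing is executed here."""
    f16_gguf = output_path + "-f16.gguf"
    convert_cmd = [
        sys.executable, "llama.cpp/convert_hf_to_gguf.py",
        model_path, "--outfile", f16_gguf, "--outtype", "f16",
    ]
    # Map requested bits onto common llama.cpp quantization presets
    preset = {4: "Q4_K_M", 5: "Q5_K_M", 8: "Q8_0"}.get(bits, "Q4_K_M")
    quantize_cmd = [
        "llama.cpp/build/bin/llama-quantize",
        f16_gguf, output_path + ".gguf", preset,
    ]
    return convert_cmd, quantize_cmd

convert_cmd, quant_cmd = gguf_export_commands("./outputs/model", "./exports/model", 4)
print(convert_cmd)
print(quant_cmd)
```

Running these via subprocess from the generated script would give the tool an export that loadable GGUF consumers (Ollama, llama.cpp) actually accept.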
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool exports models but doesn't describe what the export entails (e.g., file creation, format conversion, potential data loss, permissions required, or rate limits). For a tool with 4 parameters and no annotations, this leaves significant gaps in understanding its behavior and side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Export a fine-tuned Unsloth model') and adds necessary detail ('to various formats'). There is no wasted verbiage, and it's appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with full schema coverage but no annotations and no output schema, the description is minimally adequate. It covers the basic purpose but lacks behavioral context, usage guidelines, and output details. For a tool that likely involves file operations and format conversions, more completeness would be beneficial, but it meets a bare minimum.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by mentioning 'various formats,' which aligns with the 'export_format' enum but doesn't provide additional syntax or usage details. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Export') and resource ('a fine-tuned Unsloth model'), specifying the target formats ('various formats'). It distinguishes from siblings like 'finetune_model' or 'load_model' by focusing on export rather than creation or loading. However, it doesn't explicitly differentiate from all siblings (e.g., 'generate_text' is clearly different, but the distinction is implicit rather than explicit).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a fine-tuned model first), exclusions (e.g., not for raw models), or comparisons to sibling tools like 'list_supported_models' for checking export options. Usage is implied by the action but lacks explicit context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
