
Databricks MCP Server

by samhavens

upload_file_to_volume

Upload local files to Databricks Unity Catalog volumes for data processing. Supports large files with progress tracking and error handling.

Instructions

Upload a local file to a Databricks Unity Catalog volume.

Args:
    local_file_path: Path to local file (e.g. './data/products.json')
    volume_path: Full volume path (e.g. '/Volumes/catalog/schema/volume/file.json')
    overwrite: Whether to overwrite existing file (default: False)

Returns:
    JSON with upload results including success status, file size in MB, and upload time.
    
Example:
    # Upload large dataset to volume
    result = upload_file_to_volume(
        local_file_path='./stark_export/products_full.json',
        volume_path='/Volumes/kbqa/stark_mas_eval/stark_raw_data/products_full.json',
        overwrite=True
    )
    
Note: Handles large files (multi-GB) with progress tracking and proper error handling.
Perfect for uploading extracted datasets to Unity Catalog volumes for processing.
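
For reference, a successful call returns (as a JSON string) the dict built by the handler shown under Implementation Reference below. The keys in this sketch come from the implementation; the values are purely illustrative:

    # Illustrative success payload (keys from the implementation; values invented)
    {
        "success": True,
        "file_size_mb": 812.4,
        "upload_time_seconds": 95.2,
        "volume_path": "/Volumes/kbqa/stark_mas_eval/stark_raw_data/products_full.json",
        "file_size_bytes": 851902464
    }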

Input Schema

Name             Required  Description  Default
local_file_path  Yes       –            –
volume_path      Yes       –            –
overwrite        No        –            False
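
The JSON Schema view from the original page is not reproduced above; the following is a plausible reconstruction inferred from the table and the docstring (a sketch, not the server's published schema):

    # Inferred input schema as a Python dict (types and the default are assumed from
    # the docstring; per the review below, the schema ships without parameter descriptions)
    input_schema = {
        "type": "object",
        "properties": {
            "local_file_path": {"type": "string"},
            "volume_path": {"type": "string"},
            "overwrite": {"type": "boolean", "default": False},
        },
        "required": ["local_file_path", "volume_path"],
    }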

Implementation Reference

  • MCP tool registration using the @mcp.tool() decorator. This is the entrypoint for the 'upload_file_to_volume' tool; it wraps the core API function and serializes results to JSON.
    @mcp.tool()
    async def upload_file_to_volume(
        local_file_path: str,
        volume_path: str,
        overwrite: bool = False
    ) -> str:
        """
        Upload a local file to a Databricks Unity Catalog volume.
    
        Args:
            local_file_path: Path to local file (e.g. './data/products.json')
            volume_path: Full volume path (e.g. '/Volumes/catalog/schema/volume/file.json')
            overwrite: Whether to overwrite existing file (default: False)
    
        Returns:
            JSON with upload results including success status, file size in MB, and upload time.
            
        Example:
            # Upload large dataset to volume
            result = upload_file_to_volume(
                local_file_path='./stark_export/products_full.json',
                volume_path='/Volumes/kbqa/stark_mas_eval/stark_raw_data/products_full.json',
                overwrite=True
            )
            
        Note: Handles large files (multi-GB) with progress tracking and proper error handling.
        Perfect for uploading extracted datasets to Unity Catalog volumes for processing.
        """
        logger.info(f"Uploading file from {local_file_path} to volume: {volume_path}")
        try:
            result = await volumes.upload_file_to_volume(
                local_file_path=local_file_path,
                volume_path=volume_path,
                overwrite=overwrite
            )
            return json.dumps(result)
        except Exception as e:
            logger.error(f"Error uploading file to volume: {str(e)}")
            return json.dumps({
                "success": False,
                "error": str(e),
                "volume_path": volume_path
            })
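  • Not from the repository, but for orientation: a minimal sketch of calling this tool from the official `mcp` Python client over stdio. The server launch command and module name are hypothetical; substitute the real entrypoint.

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Hypothetical launch command; point this at the actual server entrypoint.
        params = StdioServerParameters(command="python", args=["-m", "databricks_mcp_server"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "upload_file_to_volume",
                    arguments={
                        "local_file_path": "./data/products.json",
                        "volume_path": "/Volumes/catalog/schema/volume/products.json",
                        "overwrite": True,
                    },
                )
                # The tool returns its JSON result as a single text content item.
                print(result.content[0].text)

    asyncio.run(main())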
  • Core handler that implements the upload to Unity Catalog volumes via the Databricks SDK's WorkspaceClient.files.upload method, covering file reading, the upload call, timing metrics, and error handling.
    async def upload_file_to_volume(
        local_file_path: str,
        volume_path: str,
        overwrite: bool = False
    ) -> Dict[str, Any]:
        """
        Upload a local file to a Databricks Unity Catalog volume.
        
        Args:
            local_file_path: Path to local file to upload
            volume_path: Full volume path (e.g. '/Volumes/catalog/schema/volume/file.json')
            overwrite: Whether to overwrite existing file (default: False)
            
        Returns:
            Dict containing upload results with success status, file size, and timing
            
        Raises:
            FileNotFoundError: If local file doesn't exist
        """
        start_time = time.time()
        
        if not os.path.exists(local_file_path):
            raise FileNotFoundError(f"Local file not found: {local_file_path}")
        
        # Get file size
        file_size = os.path.getsize(local_file_path)
        file_size_mb = file_size / (1024 * 1024)
        
        logger.info(f"Uploading {file_size_mb:.1f}MB from {local_file_path} to {volume_path}")
        
        try:
            # Use Databricks SDK for upload
            w = _get_workspace_client()
            
            # Read file content
            with open(local_file_path, 'rb') as f:
                file_content = f.read()
            
            # Upload using SDK - handles authentication, chunking, retries automatically
            w.files.upload(
                file_path=volume_path,
                contents=file_content,
                overwrite=overwrite
            )
            
            end_time = time.time()
            upload_time = end_time - start_time
            
            return {
                "success": True,
                "file_size_mb": round(file_size_mb, 1),
                "upload_time_seconds": round(upload_time, 1),
                "volume_path": volume_path,
                "file_size_bytes": file_size
            }
            
        except Exception as e:
            logger.error(f"Error uploading file to volume: {str(e)}")
            end_time = time.time()
            upload_time = end_time - start_time
            
            return {
                "success": False,
                "error": str(e),
                "file_size_mb": round(file_size_mb, 1),
                "failed_after_seconds": round(upload_time, 1),
                "volume_path": volume_path
            }
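  • On the caller's side, the wrapper's JSON string can be parsed and branched on the `success` flag. A short sketch grounded in the two return shapes above (`result_json` is assumed to hold the tool's text output):

    import json

    def check_upload(result_json: str) -> None:
        payload = json.loads(result_json)
        if payload.get("success"):
            # Success payloads carry size and timing metrics.
            print(f"Uploaded {payload['file_size_mb']} MB in "
                  f"{payload['upload_time_seconds']}s to {payload['volume_path']}")
        else:
            # Failure payloads carry the error message and the target path.
            raise RuntimeError(
                f"Upload failed for {payload['volume_path']}: {payload.get('error')}"
            )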
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, and it does well: it discloses key behavioral traits (handles large, multi-GB files; includes progress tracking; provides proper error handling) and describes the return format (JSON with success status, file size, and upload time). It doesn't mention authentication requirements or rate limits, but it covers most operational aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns, Example, Note), front-loads the core purpose, and every sentence adds value. It's appropriately sized for a 3-parameter tool with no annotations, avoiding both verbosity and under-specification.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a file upload operation with no annotations and no output schema, the description provides complete context: clear purpose, detailed parameter semantics, return format description, example usage, and behavioral notes about large file handling. It leaves no significant gaps for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed parameter explanations in the Args section, including examples for both path parameters and default value for overwrite. It adds substantial meaning beyond the bare schema, making all three parameters completely understandable.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Upload a local file') and target resource ('to a Databricks Unity Catalog volume'), distinguishing it from sibling tools like 'upload_file_to_dbfs'. It provides a complete verb+resource+scope statement that leaves no ambiguity about the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance with 'Perfect for uploading extracted datasets to Unity Catalog volumes for processing' and distinguishes it from alternatives by specifying the target (Unity Catalog volumes vs. DBFS). The example and note further clarify appropriate use cases, including handling large files with progress tracking.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

