Skip to main content
Glama

download_files

Download multiple files from URLs to your local filesystem with configurable options for output directory, timeout, and file size limits.

Instructions

Download multiple files from URLs and save to local filesystem.

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
urlsYesList of URLs to download
output_dirNoDirectory to save downloaded files
timeoutNoDownload timeout in seconds
max_size_mbNoMaximum file size in MB (default: 500)

Output Schema

TableJSON Schema
NameRequiredDescriptionDefault
resultsYesList of download results
failed_countYesNumber of failed downloads
success_countYesNumber of successful downloads

Implementation Reference

  • Main handler function for the 'download_files' MCP tool. Decorated with @mcp.tool for automatic registration. Handles multiple URL downloads with concurrency limits, security validations (SSRF protection, path traversal), file size checks, sanitization, and returns structured results.
    @mcp.tool(description="Download multiple files from URLs and save to local filesystem.")
    async def download_files(
        urls: Annotated[list[str], Field(description="List of URLs to download")],
        output_dir: Annotated[
            str | None, Field(description="Directory to save downloaded files")
        ] = None,
        timeout: Annotated[int, Field(description="Download timeout in seconds", ge=1, le=300)] = 60,
        max_size_mb: Annotated[
            int, Field(description="Maximum file size in MB (default: 500)", ge=1, le=5000)
        ] = MAX_FILE_SIZE_MB,
    ) -> DownloadResponse:
        """Download files from URLs and save to the local filesystem.
    
        Args:
            urls: List of URLs to download
            output_dir: Directory to save the files (defaults to ~/Downloads/mcp_downloads)
            timeout: Download timeout in seconds (1-300)
            max_size_mb: Maximum file size in MB (1-5000)
    
        Returns:
            DownloadResponse with results for each file
        """
        if output_dir is None:
            output_dir = str(DEFAULT_DOWNLOAD_DIR)
    
        # Limit number of URLs per request
        if len(urls) > MAX_URLS_PER_REQUEST:
            raise ValueError(f"Maximum {MAX_URLS_PER_REQUEST} URLs per request")
    
        # Use semaphore to limit concurrent downloads
        semaphore = asyncio.Semaphore(MAX_CONCURRENT_DOWNLOADS)
    
        async def download_with_limit(url: str) -> DownloadResult:
            async with semaphore:
                return await _download_single_file_internal(url, output_dir, None, timeout, max_size_mb)
    
        # Download all files with concurrency limit
        tasks = [download_with_limit(url) for url in urls]
        results = await asyncio.gather(*tasks, return_exceptions=False)
    
        success_count = sum(1 for r in results if r.success)
        failed_count = len(results) - success_count
    
        return DownloadResponse(results=results, success_count=success_count, failed_count=failed_count)
  • Pydantic models defining the output schema for download_files tool: DownloadResult for individual files and DownloadResponse aggregating multiple results. Input schema defined via Annotated parameters in the handler.
    class DownloadResult(BaseModel):
        """Download result model with file information"""
    
        file_path: str = Field(..., description="Full path where the file was saved")
        file_name: str = Field(..., description="Name of the downloaded file")
        file_size: int = Field(..., description="Size of the downloaded file in bytes")
        content_type: str | None = Field(None, description="MIME type of the downloaded file")
        success: bool = Field(..., description="Whether the download was successful")
        error: str | None = Field(None, description="Error message if download failed")
    
    
    class DownloadResponse(BaseModel):
        """Response model for download operations"""
    
        results: list[DownloadResult] = Field(..., description="List of download results")
        success_count: int = Field(..., description="Number of successful downloads")
        failed_count: int = Field(..., description="Number of failed downloads")
  • Initialization of the FastMCP server instance. Tool functions decorated with @mcp.tool are automatically registered here.
    mcp = FastMCP("download-server", instructions=DESCRIPTION)
  • Core helper function performing single file download with HTTP client, size limits, MIME validation, filename sanitization, SSRF protection, and error handling. Used by both download_files and download_single_file tools.
    async def _download_single_file_internal(
        url: str,
        output_dir: str,
        filename: str | None,
        timeout: int,
        max_size_mb: int,
    ) -> DownloadResult:
        """Internal async function to download a single file.
    
        Args:
            url: URL to download from
            output_dir: Directory to save file
            filename: Optional custom filename
            timeout: Download timeout in seconds
            max_size_mb: Maximum file size in MB
    
        Returns:
            DownloadResult with download information
        """
        file_path = None
        try:
            # Validate URL for SSRF
            _validate_url_safe(url)
    
            # Validate and resolve output directory
            output_path = _validate_output_dir(output_dir)
            output_path.mkdir(parents=True, exist_ok=True)
    
            # Determine filename
            if not filename:
                filename = _extract_filename_from_url(url)
            else:
                filename = _sanitize_filename(filename)
    
            # Get unique filepath to avoid collisions
            file_path = _get_unique_filepath(output_path / filename)
            final_filename = file_path.name
    
            max_size_bytes = max_size_mb * 1024 * 1024
    
            # Headers for better compatibility
            headers = {
                "User-Agent": (
                    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                    "AppleWebKit/537.36 (KHTML, like Gecko) "
                    "Chrome/120.0.0.0 Safari/537.36"
                ),
                "Accept": "*/*",
                "Accept-Language": "en-US,en;q=0.9",
                "Accept-Encoding": "gzip, deflate, br",
                "Connection": "keep-alive",
            }
    
            async with httpx.AsyncClient(timeout=timeout, follow_redirects=True) as client:
                # First, do a HEAD request to check size
                try:
                    head_response = await client.head(url, headers=headers)
                    content_length = head_response.headers.get("Content-Length")
    
                    if content_length:
                        size = int(content_length)
                        if size > max_size_bytes:
                            size_mb = size / (1024 * 1024)
                            raise ValueError(
                                f"File size ({size_mb:.2f} MB) exceeds "
                                f"maximum allowed size ({max_size_mb} MB)"
                            )
                except httpx.HTTPStatusError:
                    # HEAD request not supported, continue with GET
                    pass
    
                # Download the file
                async with client.stream("GET", url, headers=headers) as response:
                    response.raise_for_status()
    
                    content_type = response.headers.get("Content-Type", "").split(";")[0]
                    downloaded = 0
    
                    # Validate MIME type if present
                    if content_type and content_type not in ALLOWED_CONTENT_TYPES:
                        raise ValueError(f"File type not allowed: {content_type}")
    
                    # Write to file
                    with open(file_path, "wb") as f:
                        async for chunk in response.aiter_bytes(chunk_size=8192):
                            downloaded += len(chunk)
    
                            # Check size during download
                            if downloaded > max_size_bytes:
                                # Delete partial file
                                if file_path.exists():
                                    file_path.unlink()
                                size_mb = downloaded / (1024 * 1024)
                                raise ValueError(
                                    f"File exceeded size limit during download "
                                    f"({size_mb:.2f} MB > {max_size_mb} MB)"
                                )
    
                            f.write(chunk)
    
                    # Verify file was created
                    if not file_path.exists():
                        raise ValueError("File was not created")
    
                    actual_size = file_path.stat().st_size
    
                    return DownloadResult(
                        file_path=str(file_path),
                        file_name=final_filename,
                        file_size=actual_size,
                        content_type=content_type,
                        success=True,
                        error=None,
                    )
    
        except Exception as e:
            # Clean up partial file if exists
            if file_path and file_path.exists():
                try:
                    file_path.unlink()
                except Exception:
                    pass  # Best effort cleanup
    
            return DownloadResult(
                file_path="",
                file_name=filename or "",
                file_size=0,
                content_type=None,
                success=False,
                error=_sanitize_error(e),
            )
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions saving to the local filesystem, which implies a write operation, but lacks details on permissions, error handling, rate limits, or what happens if files already exist. For a tool with no annotations and potential side effects, this is inadequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values) and full schema coverage for parameters, the description's minimalism is partially acceptable. However, for a tool that performs downloads and file system writes with no annotations, more context on behavior and usage would improve completeness, leaving it at an adequate but basic level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no additional meaning beyond what's in the schema, such as explaining parameter interactions or providing examples. Baseline 3 is appropriate when the schema handles all parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('download multiple files') and resource ('from URLs'), and distinguishes from the sibling tool 'download_single_file' by specifying 'multiple files'. However, it doesn't explicitly mention the sibling tool name or contrast their purposes directly, keeping it at a 4 rather than a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'download_single_file', nor does it mention any prerequisites, constraints, or typical use cases. It states what the tool does but not when it's appropriate, resulting in minimal usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/dmitryglhf/url-download-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server