
download_single_file

Download individual files from web URLs to your local system, with optional custom filenames and a configurable download size limit.

Instructions

Download a single file from URL with optional custom filename.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| filename | No | Custom filename; derived from the URL if omitted | (none) |
| max_size_mb | No | Maximum file size in MB (1-5000) | 500 |
| output_dir | No | Directory to save the file | ~/Downloads/mcp_downloads |
| timeout | No | Download timeout in seconds (1-300) | 60 |
| url | Yes | URL of the file to download | (none) |
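
The sketch below shows one way to invoke this tool from a client, assuming the official MCP Python SDK and a server launched locally as python server.py; the launch command, script name, and example URL are assumptions, not part of this server's documentation:

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Assumed launch command; replace with how the server is actually started
        params = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Only "url" is required; the other arguments override defaults
                result = await session.call_tool(
                    "download_single_file",
                    {
                        "url": "https://example.com/files/report.pdf",
                        "filename": "report.pdf",
                        "max_size_mb": 100,
                    },
                )
                print(result.content)

    asyncio.run(main())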

Implementation Reference

  • Registration of the download_single_file tool using the @mcp.tool decorator.
    @mcp.tool(description="Download a single file from URL with optional custom filename.")
  • The handler function for the 'download_single_file' tool. Validates input parameters via Pydantic annotations and delegates to the internal download implementation.
    async def download_single_file(
        url: Annotated[str, Field(description="URL of the file to download")],
        output_dir: Annotated[str | None, Field(description="Directory to save the file")] = None,
        filename: Annotated[str | None, Field(description="Custom filename (optional)")] = None,
        timeout: Annotated[int, Field(description="Download timeout in seconds", ge=1, le=300)] = 60,
        max_size_mb: Annotated[
            int, Field(description="Maximum file size in MB (default: 500)", ge=1, le=5000)
        ] = MAX_FILE_SIZE_MB,
    ) -> DownloadResult:
        """Download a single file from URL and save to the local filesystem.

        Args:
            url: URL of the file to download
            output_dir: Directory to save the file (defaults to ~/Downloads/mcp_downloads)
            filename: Custom filename (if not provided, extracted from URL)
            timeout: Download timeout in seconds (1-300)
            max_size_mb: Maximum file size in MB (1-5000)

        Returns:
            DownloadResult with download information
        """
        if output_dir is None:
            output_dir = str(DEFAULT_DOWNLOAD_DIR)
        return await _download_single_file_internal(url, output_dir, filename, timeout, max_size_mb)
  • Pydantic BaseModel defining the output schema for the download_single_file tool. An illustrative serialized result appears after this list.
    class DownloadResult(BaseModel):
        """Download result model with file information"""

        file_path: str = Field(..., description="Full path where the file was saved")
        file_name: str = Field(..., description="Name of the downloaded file")
        file_size: int = Field(..., description="Size of the downloaded file in bytes")
        content_type: str | None = Field(None, description="MIME type of the downloaded file")
        success: bool = Field(..., description="Whether the download was successful")
        error: str | None = Field(None, description="Error message if download failed")
  • Core helper function implementing the actual download logic: SSRF protection, output-directory and file validation, streaming async HTTP download with size limits, filename sanitization, and error handling. A hypothetical sketch of the SSRF helper appears after this list.
    async def _download_single_file_internal(
        url: str,
        output_dir: str,
        filename: str | None,
        timeout: int,
        max_size_mb: int,
    ) -> DownloadResult:
        """Internal async function to download a single file.

        Args:
            url: URL to download from
            output_dir: Directory to save file
            filename: Optional custom filename
            timeout: Download timeout in seconds
            max_size_mb: Maximum file size in MB

        Returns:
            DownloadResult with download information
        """
        file_path = None
        try:
            # Validate URL for SSRF
            _validate_url_safe(url)

            # Validate and resolve output directory
            output_path = _validate_output_dir(output_dir)
            output_path.mkdir(parents=True, exist_ok=True)

            # Determine filename
            if not filename:
                filename = _extract_filename_from_url(url)
            else:
                filename = _sanitize_filename(filename)

            # Get unique filepath to avoid collisions
            file_path = _get_unique_filepath(output_path / filename)
            final_filename = file_path.name

            max_size_bytes = max_size_mb * 1024 * 1024

            # Headers for better compatibility
            headers = {
                "User-Agent": (
                    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                    "AppleWebKit/537.36 (KHTML, like Gecko) "
                    "Chrome/120.0.0.0 Safari/537.36"
                ),
                "Accept": "*/*",
                "Accept-Language": "en-US,en;q=0.9",
                "Accept-Encoding": "gzip, deflate, br",
                "Connection": "keep-alive",
            }

            async with httpx.AsyncClient(timeout=timeout, follow_redirects=True) as client:
                # First, do a HEAD request to check size
                try:
                    head_response = await client.head(url, headers=headers)
                    content_length = head_response.headers.get("Content-Length")
                    if content_length:
                        size = int(content_length)
                        if size > max_size_bytes:
                            size_mb = size / (1024 * 1024)
                            raise ValueError(
                                f"File size ({size_mb:.2f} MB) exceeds "
                                f"maximum allowed size ({max_size_mb} MB)"
                            )
                except httpx.HTTPStatusError:
                    # HEAD request not supported, continue with GET
                    pass

                # Download the file
                async with client.stream("GET", url, headers=headers) as response:
                    response.raise_for_status()

                    content_type = response.headers.get("Content-Type", "").split(";")[0]
                    downloaded = 0

                    # Validate MIME type if present
                    if content_type and content_type not in ALLOWED_CONTENT_TYPES:
                        raise ValueError(f"File type not allowed: {content_type}")

                    # Write to file
                    with open(file_path, "wb") as f:
                        async for chunk in response.aiter_bytes(chunk_size=8192):
                            downloaded += len(chunk)
                            # Check size during download
                            if downloaded > max_size_bytes:
                                # Delete partial file
                                if file_path.exists():
                                    file_path.unlink()
                                size_mb = downloaded / (1024 * 1024)
                                raise ValueError(
                                    f"File exceeded size limit during download "
                                    f"({size_mb:.2f} MB > {max_size_mb} MB)"
                                )
                            f.write(chunk)

            # Verify file was created
            if not file_path.exists():
                raise ValueError("File was not created")

            actual_size = file_path.stat().st_size

            return DownloadResult(
                file_path=str(file_path),
                file_name=final_filename,
                file_size=actual_size,
                content_type=content_type,
                success=True,
                error=None,
            )

        except Exception as e:
            # Clean up partial file if exists
            if file_path and file_path.exists():
                try:
                    file_path.unlink()
                except Exception:
                    pass  # Best effort cleanup
            return DownloadResult(
                file_path="",
                file_name=filename or "",
                file_size=0,
                content_type=None,
                success=False,
                error=_sanitize_error(e),
            )
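
For illustration only, here is what a successful result could look like when built and serialized by hand; the values are invented, not output from a real download, and serialization assumes Pydantic v2:

    # Invented example values; not produced by an actual download
    result = DownloadResult(
        file_path="/home/user/Downloads/mcp_downloads/report.pdf",
        file_name="report.pdf",
        file_size=482133,
        content_type="application/pdf",
        success=True,
        error=None,
    )
    # model_dump_json is the Pydantic v2 serialization API
    print(result.model_dump_json(indent=2))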
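
The validation and sanitization helpers referenced above (_validate_url_safe, _validate_output_dir, _sanitize_filename, _get_unique_filepath, _sanitize_error) are not shown on this page. The sketch below is a hypothetical illustration of the kind of SSRF check such a helper might perform; the rules and error messages are assumptions, not the server's actual code:

    import ipaddress
    import socket
    from urllib.parse import urlparse

    def _validate_url_safe(url: str) -> None:
        """Hypothetical SSRF guard: reject non-HTTP(S) schemes and URLs that
        resolve to private, loopback, or otherwise non-public addresses."""
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https"):
            raise ValueError(f"Unsupported URL scheme: {parsed.scheme!r}")
        if not parsed.hostname:
            raise ValueError("URL has no hostname")
        # Check every address the hostname resolves to, not just the first
        for info in socket.getaddrinfo(parsed.hostname, None):
            # Strip any IPv6 zone index (e.g. "fe80::1%eth0") before parsing
            addr = ipaddress.ip_address(info[4][0].split("%", 1)[0])
            if not addr.is_global:
                raise ValueError("URL resolves to a non-public address")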
