inventer-dev / mcp-internet-speed-test

measure_download_speed

Measure download speed using incrementally larger file sizes, with an adjustable maximum file size and per-size sustain duration. Returns speed test results.

Instructions

Measure download speed using incremental file sizes.

Args:
    size_limit: Maximum file size to test (default: 128MB)
    sustain_time: Duration in seconds for each test (1-8, default: 8)

Returns:
    Dictionary with download speed results

Input Schema

Name          Required  Description  Default
size_limit    No        (none)       "128MB"
sustain_time  No        (none)       8

Implementation Reference

  • Tool registered with FastMCP using the @mcp.tool decorator with a download icon
    @mcp.tool(icons=[ICON_DOWNLOAD])
    async def measure_download_speed(
        size_limit: str = "128MB",
        sustain_time: int = 8,
        context: Context[ServerSession, None] = None,
    ) -> dict:
  • Main handler function that measures download speed by iterating through incremental file sizes (128KB to 128MB), streaming each download via httpx, and returning the speed in Mbps
    async def measure_download_speed(
        size_limit: str = "128MB",
        sustain_time: int = 8,
        context: Context[ServerSession, None] = None,
    ) -> dict:
        """
        Measure download speed using incremental file sizes.
    
        Args:
            size_limit: Maximum file size to test (default: 128MB)
            sustain_time: Duration in seconds for each test (1-8, default: 8)
    
        Returns:
            Dictionary with download speed results
        """
        # Validate sustain_time
        sustain_time = max(MIN_TEST_DURATION, min(MAX_TEST_DURATION, float(sustain_time)))
        results = []
        final_result = None
    
        # Find the index of the size limit in our progression
        max_index = (
            SIZE_PROGRESSION.index(size_limit)
            if size_limit in SIZE_PROGRESSION
            else len(SIZE_PROGRESSION) - 1
        )
    
        total_steps = max_index + 1
        current_step = 0
    
        await safe_log_info(context, "Starting download speed test...")
    
        # Test each file size in order, up to the specified limit
        # Methodology: download each size, if it takes < 8s move to next,
        # if it takes >= 8s use that result as final and stop.
        async with httpx.AsyncClient() as client:
            for size_key in SIZE_PROGRESSION[: max_index + 1]:
                current_step += 1
                progress_message = f"Testing {size_key} file..."
                await safe_report_progress(context, current_step, total_steps, progress_message)
    
                url = DEFAULT_DOWNLOAD_URLS[size_key]
                start = time.time()
                total_size = 0
                current_result = None
    
                async with client.stream(
                    "GET",
                    url,
                ) as response:
                    # Extract server information from headers
                    server_info = extract_server_info(dict(response.headers))
    
                    async for chunk in response.aiter_bytes(chunk_size=1024):
                        if chunk:
                            total_size += len(chunk)
    
                            # Check elapsed time during download
                            elapsed_time = time.time() - start
    
                            # Update current result continuously
                            speed_mbps = ((total_size * 8) / (1024 * 1024)) / elapsed_time
                            current_result = {
                                "download_speed": round(speed_mbps, 2),
                                "elapsed_time": round(elapsed_time, 2),
                                "data_size": total_size,
                                "size": size_key,
                                "url": url,
                                "server_info": server_info,
                            }
    
                            # If test duration exceeded, stop streaming this file
                            if elapsed_time >= sustain_time:
                                break
    
                # Record final measurement for this file size
                if current_result is None:
                    elapsed_time = time.time() - start
                    current_result = {
                        "download_speed": 0,
                        "elapsed_time": round(elapsed_time, 2),
                        "data_size": total_size,
                        "size": size_key,
                        "url": url,
                        "server_info": server_info if total_size > 0 else None,
                    }
    
                results.append(current_result)
                final_result = current_result
    
                # If this download took >= sustain_time, we found our measurement
                if current_result["elapsed_time"] >= sustain_time:
                    break
    
        # Return the final result or an error if all tests failed
        if final_result:
            return {
                "download_speed": final_result["download_speed"],
                "unit": "Mbps",
                "elapsed_time": final_result["elapsed_time"],
                "data_size": final_result["data_size"],
                "size_used": final_result["size"],
                "server_info": final_result["server_info"],
                "all_tests": results,
            }
        return {
            "error": True,
            "message": "All download tests failed",
            "details": results,
        }
  • Input schema: size_limit (str, default '128MB') and sustain_time (int, 1-8, default 8)
    async def measure_download_speed(
        size_limit: str = "128MB",
        sustain_time: int = 8,
        context: Context[ServerSession, None] = None,
    ) -> dict:
  • Map of size keys (128KB to 128MB) to GitHub media download URLs used by the download speed tool
    DEFAULT_DOWNLOAD_URLS = {
        "128KB": f"{GITHUB_MEDIA_URL}/128KB.bin",
        "256KB": f"{GITHUB_MEDIA_URL}/256KB.bin",
        "512KB": f"{GITHUB_MEDIA_URL}/512KB.bin",
        "1MB": f"{GITHUB_MEDIA_URL}/1MB.bin",
        "2MB": f"{GITHUB_MEDIA_URL}/2MB.bin",
        "4MB": f"{GITHUB_MEDIA_URL}/4MB.bin",
        "8MB": f"{GITHUB_MEDIA_URL}/8MB.bin",
        "16MB": f"{GITHUB_MEDIA_URL}/16MB.bin",
        "32MB": f"{GITHUB_MEDIA_URL}/32MB.bin",
        "64MB": f"{GITHUB_MEDIA_URL}/64MB.bin",
        "128MB": f"{GITHUB_MEDIA_URL}/128MB.bin",
    }
  • Ordered list of size keys used to progressively test larger file sizes during download speed measurement
    SIZE_PROGRESSION = [
        "128KB",
        "256KB",
        "512KB",
        "1MB",
        "2MB",
        "4MB",
        "8MB",
        "16MB",
        "32MB",
        "64MB",
        "128MB",
    ]
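Since every entry in DEFAULT_DOWNLOAD_URLS follows the same `{size}.bin` pattern, the map could equally be derived from SIZE_PROGRESSION, which keeps the two constants in sync by construction. A sketch (the base URL below is a stand-in, not the server's real GITHUB_MEDIA_URL):

```python
# Stand-in base URL for illustration only; the real value is defined in the server's config
GITHUB_MEDIA_URL = "https://example.invalid/media"

SIZE_PROGRESSION = [
    "128KB", "256KB", "512KB", "1MB", "2MB", "4MB",
    "8MB", "16MB", "32MB", "64MB", "128MB",
]

# One dict comprehension replaces the hand-written mapping
DEFAULT_DOWNLOAD_URLS = {size: f"{GITHUB_MEDIA_URL}/{size}.bin" for size in SIZE_PROGRESSION}

print(DEFAULT_DOWNLOAD_URLS["128KB"])  # https://example.invalid/media/128KB.bin
```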
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It mentions 'incremental file sizes', implying bandwidth usage, but does not explicitly state whether the tool is safe or destructive, or whether it has rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The docstring format with Args and Returns sections is well structured, but the Returns description is vague and could be more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With two optional parameters and no output schema, the description covers parameter semantics adequately but the return value is underspecified. For a simple tool, it is somewhat complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Explains size_limit as maximum file size and sustain_time as duration per test (1-8 seconds). The schema has 0% description coverage, so this adds essential meaning beyond types and defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool measures download speed using incremental file sizes, distinguishing it from siblings like measure_latency and measure_upload_speed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Sibling tools exist but no comparative information is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/inventer-dev/mcp-internet-speed-test'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.