inventer-dev/mcp-internet-speed-test

measure_jitter

Measures jitter, the variation in network latency, by taking multiple samples to a specified URL.

Instructions

Jitter is the variation in latency, so we need multiple measurements.

Input Schema

Name      Required  Description  Default
url       No                     https://httpi.dev/get
samples   No                     5
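
For illustration, a hypothetical arguments payload for invoking this tool; both fields are optional and fall back to the defaults shown above:

    # Hypothetical arguments for a measure_jitter call; both fields are
    # optional and default to the values in the schema above.
    arguments = {
        "url": "https://httpi.dev/get",  # endpoint to sample
        "samples": 5,                    # number of timed GET requests
    }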

Implementation Reference

  • The measure_jitter tool handler: measures jitter (the variation in latency) by timing multiple HTTP GET samples, computing the average latency, and reporting jitter as the mean absolute deviation from that average. Returns jitter, unit, average_latency, samples, url, and server_info. A standalone sketch of the jitter arithmetic follows this list.
    @mcp.tool(icons=[ICON_JITTER])
    async def measure_jitter(
        url: str = DEFAULT_LATENCY_URL,
        samples: int = 5,
        context: Context[ServerSession, None] = None,
    ) -> dict:
        """Jitter is the variation in latency, so we need multiple measurements."""
        latency_values = []
        server_info = None
    
        await safe_log_info(context, f"Starting jitter measurement with {samples} samples...")
    
        async with httpx.AsyncClient() as client:
            for sample_index in range(samples):
                message = f"Sample {sample_index + 1}/{samples}"
                await safe_report_progress(context, sample_index + 1, samples, message)
                start = time.time()
                response = await client.get(url)
                end = time.time()
                latency_values.append((end - start) * 1000)  # Convert to milliseconds
    
                # Extract server info from the first response
                if sample_index == 0:
                    server_info = extract_server_info(dict(response.headers))
    
        # Calculate average latency
        avg_latency = sum(latency_values) / len(latency_values)
    
        # Calculate jitter (average deviation from the mean)
        jitter = sum(abs(latency - avg_latency) for latency in latency_values) / len(
            latency_values,
        )
    
        return {
            "jitter": round(jitter, 2),
            "unit": "ms",
            "average_latency": round(avg_latency, 2),
            "samples": samples,
            "url": url,
            "server_info": server_info,
        }
  • Registration of measure_jitter as an MCP tool with ICON_JITTER via @mcp.tool decorator.
    @mcp.tool(icons=[ICON_JITTER])
  • Input schema for measure_jitter: url (str, default DEFAULT_LATENCY_URL), samples (int, default 5), context (optional Context). Returns dict with jitter, unit, average_latency, samples, url, server_info.
    @mcp.tool(icons=[ICON_JITTER])
    async def measure_jitter(
        url: str = DEFAULT_LATENCY_URL,
        samples: int = 5,
        context: Context[ServerSession, None] = None,
    ) -> dict:
  • ICON_JITTER icon definition used by the tool's decorator.
    ICON_JITTER = Icon(src=_SVG_TPL.format("📊"), mimeType="image/svg+xml")
  • Call site where measure_jitter is invoked within the run_complete_speed_test tool with url_latency, 5 samples, and context.
    jitter_result = await measure_jitter(url_latency, 5, context)
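
For reference, a minimal standalone sketch of the handler's jitter arithmetic; the latency values below are hypothetical, chosen only to illustrate the calculation:

    # Standalone sketch of the jitter calculation performed by measure_jitter.
    # The sample latencies below are hypothetical (milliseconds).
    latency_values = [42.1, 38.7, 45.3, 40.2, 39.9]

    # Average latency across all samples.
    avg_latency = sum(latency_values) / len(latency_values)

    # Jitter: mean absolute deviation of each sample from the average.
    jitter = sum(abs(latency - avg_latency) for latency in latency_values) / len(latency_values)

    print(round(avg_latency, 2), round(jitter, 2))  # prints: 41.24 1.97
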
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must convey behavioral traits. It mentions 'multiple measurements,' but fails to disclose side effects, system impact, or any constraints beyond the name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, but it is not concise in a useful way—it provides a definition of jitter rather than a tool description. It under-specifies the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and two parameters with defaults, the description should provide usage context. It is completely inadequate, offering no help for a correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain any parameters (url, samples). It adds no semantic value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description defines jitter but does not explicitly state that the tool measures jitter. It says 'we need multiple measurements,' which implies the tool's function but remains vague and indirect.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus its siblings (e.g., measure_latency). The description lacks context for appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/inventer-dev/mcp-internet-speed-test'
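
A Python equivalent of the curl call above, sketched with httpx (the same HTTP client the server's implementation uses); this assumes the endpoint returns JSON:

    import httpx

    # Fetch this server's directory entry from the Glama MCP API.
    url = "https://glama.ai/api/mcp/v1/servers/inventer-dev/mcp-internet-speed-test"
    response = httpx.get(url)
    response.raise_for_status()
    print(response.json())  # assuming the endpoint returns a JSON document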

If you have feedback or need assistance with the MCP directory API, please join our Discord server.