perfsonar-mcp

by ajragusa

get_latency

Measure network latency between source and destination hosts to identify performance issues and optimize connectivity using historical delay data.

Instructions

Get latency/delay measurements between source and destination.

Input Schema

Name           Required  Description                  Default
source         Yes       Source host/IP address
destination    Yes       Destination host/IP address
timeRange      No        Time range in seconds        86400 (24 hours)
summaryWindow  No        Summary window in seconds    (none)
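
For concreteness, a call to this tool through the standard MCP Python SDK client might look like the sketch below. The host names and time values are placeholders, the surrounding session setup is omitted, and only `ClientSession.call_tool` is assumed from the SDK.

    # Hypothetical invocation of get_latency via the MCP Python SDK client.
    # Host names are placeholders; transport setup and initialization omitted.
    from mcp import ClientSession

    async def fetch_latency(session: ClientSession) -> str:
        result = await session.call_tool(
            "get_latency",
            arguments={
                "source": "ps-a.example.net",        # required
                "destination": "ps-b.example.net",   # required
                "timeRange": 3600,      # optional; tool defaults to 86400 (24 h)
                "summaryWindow": 300,   # optional; enables statistics summaries
            },
        )
        # The tool returns its JSON payload as text content.
        return result.content[0].text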

Implementation Reference

  • The FastMCP tool definition for `get_latency`, which serves as the entry point for MCP requests (a hedged registration sketch follows these excerpts).
    async def get_latency(
        source: str,
        destination: str,
        timeRange: int = 86400,
        summaryWindow: Optional[int] = None,
    ) -> str:
        """Get latency/delay measurements between source and destination.
    
        Args:
            source: Source host/IP address
            destination: Destination host/IP address
            timeRange: Time range in seconds (default: 86400 = 24 hours)
            summaryWindow: Summary window in seconds for aggregation
    
        Returns:
            JSON string with latency measurement data
        """
        results = await perfsonar_client.get_latency(source, destination, timeRange, summaryWindow)
        return json.dumps([r.model_dump(by_alias=True) for r in results], indent=2)
  • The underlying implementation of `get_latency` within the `PerfSONARClient` class.
    async def get_latency(
        self,
        source: str,
        destination: str,
        time_range: Optional[int] = None,
        summary_window: Optional[int] = None,
    ) -> List[MeasurementResult]:
        """
        Get latency/delay measurements between source and destination
    
        Args:
            source: Source host/IP address
            destination: Destination host/IP address
            time_range: Time range in seconds from now
            summary_window: Summary window in seconds
    
        Returns:
            List of measurement results
        """
        logger.info(f"Getting latency: {source} -> {destination}")
        # Try histogram-owdelay first, fall back to histogram-rtt
        metadata = await self.query_measurements(
            MeasurementQueryParams(
                source=source, destination=destination, event_type="histogram-owdelay"
            )
        )
    
        if not metadata:
            logger.debug("No histogram-owdelay data, trying histogram-rtt")
            metadata = await self.query_measurements(
                MeasurementQueryParams(
                    source=source, destination=destination, event_type="histogram-rtt"
                )
            )
    
        results = []
        for meta in metadata:
            event_types = ["histogram-owdelay", "histogram-rtt"]
            for event_type_name in event_types:
                event_type = next(
                    (e for e in meta.event_types if e.event_type == event_type_name), None
                )
                if not event_type:
                    continue
    
                data = await self.get_measurement_data(
                    MeasurementDataParams(
                        metadata_key=meta.metadata_key,
                        event_type=event_type_name,
                        summary_type="statistics" if summary_window else None,
                        summary_window=summary_window,
                        time_range=time_range,
                    )
                )
    
                results.append(MeasurementResult(metadata=meta, data=data))
                break
    
        logger.info(f"Retrieved {len(results)} latency results")
        return results
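
Neither excerpt shows how the tool is wired into a server. As context, here is a minimal, self-contained sketch of the usual FastMCP registration pattern; the server name and the stub client are assumptions for illustration, not code from this repository.

    # Minimal FastMCP registration sketch. Assumptions: the server name
    # ("perfsonar") and the stub standing in for the real PerfSONARClient.
    import json
    from typing import Optional

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("perfsonar")

    class _StubClient:
        """Illustrative stand-in for the real PerfSONARClient."""
        async def get_latency(self, source, destination, time_range, summary_window):
            return []  # the real client returns List[MeasurementResult]

    perfsonar_client = _StubClient()

    @mcp.tool()  # FastMCP builds the input schema from type hints and the docstring
    async def get_latency(
        source: str,
        destination: str,
        timeRange: int = 86400,
        summaryWindow: Optional[int] = None,
    ) -> str:
        """Get latency/delay measurements between source and destination."""
        results = await perfsonar_client.get_latency(source, destination, timeRange, summaryWindow)
        return json.dumps([r.model_dump(by_alias=True) for r in results], indent=2)

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default
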
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'Get[s]' measurements, implying a read operation, but lacks details on permissions, rate limits, whether it returns real-time or historical data, or any side effects. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
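
One concrete way to close this gap, beyond rewording the description, is to attach MCP tool annotations at registration time. The sketch below assumes the `ToolAnnotations` type and the `annotations` parameter from the official MCP Python SDK; the hint values reflect the read-only behavior described above and are not taken from the repository.

    # Hypothetical: disclosing behavior through MCP tool annotations.
    from mcp.server.fastmcp import FastMCP
    from mcp.types import ToolAnnotations

    mcp = FastMCP("perfsonar")

    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,       # queries archived data; changes nothing
            destructiveHint=False,   # no updates or deletions
            idempotentHint=True,     # repeating the same query is safe
            openWorldHint=True,      # talks to external perfSONAR archives
        )
    )
    async def get_latency(source: str, destination: str) -> str:
        """Get latency/delay measurements between source and destination."""
        return "[]"  # placeholder body for the sketch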

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It is front-loaded with the core purpose, making it easy to understand quickly. Every part of the sentence contributes directly to explaining the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of latency measurement tools and the absence of annotations and an output schema, the description is incomplete. It does not cover behavioral aspects such as data freshness, error handling, or return format, leaving gaps that could hinder an agent's ability to use the tool effectively alongside its sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents every parameter. The description adds little beyond the schema: it implies the parameters name a source and destination for measurements, but offers no additional context such as format examples or usage tips. A baseline score of 3 is appropriate when the schema carries most of the documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('latency/delay measurements'), and specifies what is being measured ('between source and destination'). It distinguishes the tool from siblings like 'get_packet_loss' or 'get_throughput' by focusing on latency, but could be more explicit about how it differs from 'schedule_latency_test'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, context, or exclusions, such as how it differs from 'schedule_latency_test' (which might schedule a test rather than retrieve measurements) or 'get_measurement_data' (which could be more general).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
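
To make that concrete, the description itself could carry the retrieve-versus-schedule distinction. The revised wording below is a sketch built from this review's observations, not text from the repository, and the sibling tool names are assumed from the server's tool list.

    # Hypothetical revised description embedding explicit usage guidance.
    GET_LATENCY_DESCRIPTION = (
        "Get historical latency/delay measurements between source and "
        "destination from the measurement archive. Read-only: does not run "
        "a new test. Use schedule_latency_test to trigger an on-demand "
        "measurement, and get_packet_loss or get_throughput for other "
        "metrics on the same path."
    )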

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ajragusa/perfsonar-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.