Glama

compare_services

Compare reliability ratings across multiple services to identify providers with the best track record for uptime, latency, and real-world reliability.

Instructions

Compare reliability ratings across multiple services side by side. Like reading comparative reviews — see which provider has the best track record for uptime, latency, and real-world reliability right now.

Returns services ranked by the chosen metric with a recommendation and the reasoning behind it.

Args:
    services: List of service slugs to compare (max 10)
    sort_by: Metric to sort by — 'overall', 'uptime', 'latency', 'reliability'

Input Schema

Name      Required  Description                                         Default
services  Yes       List of service slugs to compare (max 10)
sort_by   No        Sort metric: overall, uptime, latency, reliability  overall
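As a concrete illustration of a well-formed call, the arguments below are hypothetical; the slugs are made up and do not refer to real Preflight identifiers:

```python
# Hypothetical arguments for compare_services; the slugs are illustrative only.
arguments = {
    "services": ["stripe", "twilio", "sendgrid"],
    "sort_by": "uptime",  # one of: overall, uptime, latency, reliability
}

# The description caps the list at 10 slugs, so a careful caller checks first.
assert len(arguments["services"]) <= 10
```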

Output Schema

No fields documented.

Implementation Reference

  • The 'compare_services' tool implementation, which fetches comparative reliability ratings from the Preflight API.
    import httpx  # PREFLIGHT_API and _headers() are assumed defined elsewhere

    @mcp.tool
    async def compare_services(
        services: list[str],
        sort_by: str = "overall",
    ) -> dict:
        """Compare reliability ratings across multiple services side by side.
        Like reading comparative reviews — see which provider has the best track
        record for uptime, latency, and real-world reliability right now.
    
        Returns services ranked by the chosen metric with a recommendation
        and the reasoning behind it.
    
        Args:
            services: List of service slugs to compare (max 10)
            sort_by: Metric to sort by — 'overall', 'uptime', 'latency', 'reliability'
        """
    
        try:
            async with httpx.AsyncClient(timeout=10) as client:
                resp = await client.get(
                    f"{PREFLIGHT_API}/v1/compare",
                    params={"services": ",".join(services), "sort_by": sort_by},
                    headers=_headers(),
                )
    
                resp.raise_for_status()
    
                return resp.json()
        except httpx.HTTPStatusError as exc:
            return {
                "error": True,
                "status_code": exc.response.status_code,
                "detail": exc.response.text,
            }
        except (httpx.ConnectError, httpx.TimeoutException) as exc:
            return {
                "error": True,
                "status_code": None,
                "detail": f"Connection failed: {exc}",
            }
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool does (comparative analysis with ranking and recommendation) and output behavior, but lacks details on permissions, rate limits, data freshness, or error handling. The description adds value by explaining the ranking and recommendation aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by output behavior and parameter details. Every sentence earns its place with no wasted words, and the Args section is clearly separated for quick reference.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, comparative analysis), no annotations, but with an output schema present, the description is mostly complete. It explains the purpose, output (ranking with recommendation and reasoning), and parameters well, but could benefit from more behavioral context like data sources or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides clear semantic explanations for both parameters: 'services' as 'List of service slugs to compare (max 10)' and 'sort_by' as 'Metric to sort by' with enumerated values. This adds essential meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
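One common way to close that 0% schema-coverage gap in FastMCP-style servers is to attach Pydantic Field metadata to the parameters. The sketch below assumes Pydantic v2 is available and is not taken from the actual source:

```python
from typing import Annotated, Literal

from pydantic import Field

async def compare_services(
    services: Annotated[
        list[str],
        Field(description="Service slugs to compare", max_length=10),
    ],
    sort_by: Annotated[
        Literal["overall", "uptime", "latency", "reliability"],
        Field(description="Metric to rank the comparison by"),
    ] = "overall",
) -> dict:
    ...
```

With annotations like these, the generated input schema carries its own parameter descriptions and the enum constraint on sort_by, so the prose description no longer has to compensate alone.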

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('compare reliability ratings', 'see which provider has the best track record') and resources ('multiple services', 'uptime, latency, and real-world reliability'). It distinguishes from sibling tools by focusing on comparative analysis rather than individual checks or reporting outcomes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('compare reliability ratings across multiple services side by side'), but does not explicitly state when not to use it or name alternatives among sibling tools (check_reliability, report_outcome). The comparative nature is implied to differentiate it from single-service checks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
