
check_reliability

Check reliability ratings for APIs and MCP servers using independent synthetic probes and crowdsourced telemetry to assess uptime, latency, and error rates before committing to a service.

Instructions

Look up the independent reliability rating for an API or MCP server. Like checking a restaurant's reviews before booking — see the real uptime, latency, error rate, and community experience before you commit to a service.

Returns a trust score (0-100), current operational status, trend direction, and any known issues. Scores are based on independent synthetic probes and crowdsourced telemetry from real agent traffic — not vendor self-reporting.

Args:

  • service: Service slug (e.g., 'stripe-mcp', 'openai-api') or partial name
  • metrics: Optional list of specific metrics to include: 'uptime', 'latency', 'reliability', 'maintenance', 'community'
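
For illustration, here are argument payloads that match the input schema below; the values are the examples quoted in the description, not live data:

    # Hypothetical argument payloads for check_reliability (illustrative values).
    minimal_call = {"service": "openai-api"}
    filtered_call = {
        "service": "stripe-mcp",
        # any subset of: 'uptime', 'latency', 'reliability', 'maintenance', 'community'
        "metrics": ["uptime", "latency", "community"],
    }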

Input Schema

  Name     Required  Description  Default
  service  Yes       -            -
  metrics  No        -            -

Output Schema

  No fields documented.

Implementation Reference

  • The handler for the `check_reliability` tool, which queries the Preflight API for service reliability metrics.
    # Requires httpx; PREFLIGHT_API and _headers() are module-level
    # definitions not shown in this excerpt.
    import httpx

    async def check_reliability(
        service: str,
        metrics: list[str] | None = None,
    ) -> dict:
        """Look up the independent reliability rating for an API or MCP server.
        Like checking a restaurant's reviews before booking — see the real uptime,
        latency, error rate, and community experience before you commit to a service.
    
        Returns a trust score (0-100), current operational status, trend direction,
        and any known issues. Scores are based on independent synthetic probes and
        crowdsourced telemetry from real agent traffic — not vendor self-reporting.
    
        Args:
            service: Service slug (e.g., 'stripe-mcp', 'openai-api') or partial name
            metrics: Optional list of specific metrics to include:
                     'uptime', 'latency', 'reliability', 'maintenance', 'community'
        """
    
        params = {"service": service}
    
        if metrics:
            # forward the requested metrics as one comma-separated query param
            params["metrics"] = ",".join(metrics)
    
        try:
            async with httpx.AsyncClient(timeout=10) as client:
                resp = await client.get(
                    f"{PREFLIGHT_API}/v1/score", params=params, headers=_headers()
                )
    
                resp.raise_for_status()
    
                return resp.json()
        except httpx.HTTPStatusError as exc:
            # Return HTTP error responses as structured data instead of raising
            return {
                "error": True,
                "status_code": exc.response.status_code,
                "detail": exc.response.text,
            }
        except (httpx.ConnectError, httpx.TimeoutException) as exc:
            return {
                "error": True,
                "status_code": None,
                "detail": f"Connection failed: {exc}",
            }
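
    # A usage sketch, assuming the handler above is importable. Failures come
    # back as plain dicts rather than exceptions, so callers branch on "error";
    # connection/timeout failures report status_code=None.
    import asyncio

    async def demo() -> None:
        result = await check_reliability("stripe-mcp", metrics=["uptime"])
        if result.get("error"):
            print(f"lookup failed ({result['status_code']}): {result['detail']}")
        else:
            print(result)  # trust score, status, trend, known issues

    asyncio.run(demo())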
  • Registration of the `check_reliability` tool using the @mcp.tool decorator.
    @mcp.tool
    async def check_reliability(
        service: str,
        metrics: list[str] | None = None,
    ) -> dict:
        ...  # body as shown in the handler above
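
For context, a minimal sketch of the server scaffolding such a registration implies, assuming the FastMCP framework (whose bare @mcp.tool decorator matches the snippet above); the server name "preflight" is an illustrative guess:

    # Hypothetical server entry point, assuming FastMCP; names are illustrative.
    from fastmcp import FastMCP

    mcp = FastMCP("preflight")

    # ... tool definitions such as check_reliability are registered here ...

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default
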
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so well. It discloses key behavioral traits: the tool returns a trust score, operational status, trend, and known issues; explains that scores are based on independent probes and crowdsourced telemetry; and clarifies it's not vendor self-reporting. It doesn't mention rate limits or auth needs, but covers most critical aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by an analogy, then details on returns and data sources, and ends with parameter explanations. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, the lack of annotations, 0% schema description coverage, and the presence of an output schema, the description is complete enough. It covers purpose, usage, behavior, and parameters thoroughly, and since an output schema exists, it doesn't need to detail return values explicitly, making it well-rounded for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate, which it does effectively. It explains the 'service' parameter as a 'slug or partial name' with examples, and details the 'metrics' parameter as an optional list with specific metric options ('uptime', 'latency', etc.), adding meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Look up the independent reliability rating for an API or MCP server.' It specifies the verb ('look up') and resource ('reliability rating'), and distinguishes it from siblings by focusing on individual service checks rather than comparisons or reporting outcomes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'before you commit to a service' and 'see the real uptime, latency, error rate, and community experience.' It implies usage for pre-commitment evaluation but does not explicitly state when not to use it or name alternatives among siblings (e.g., compare_services for comparisons).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/gsmethells/preflight-mcp'
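
The same lookup with httpx, for consistency with the handler's language (a sketch; the response is whatever JSON the directory API returns):

    # Python equivalent of the curl example above, using httpx.
    import httpx

    resp = httpx.get("https://glama.ai/api/mcp/v1/servers/gsmethells/preflight-mcp")
    resp.raise_for_status()
    print(resp.json())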

If you have feedback or need assistance with the MCP directory API, please join our Discord server.