
PeerGlass

by duksh

rir_peering_info

Read-only · Idempotent

Fetch peering policy, IXP presence, NOC contacts, and BGP neighbours for any ASN using PeeringDB and RIPE Stat data sources.

Instructions

Fetch peering policy, IXP presence, NOC contacts, and BGP neighbours for an ASN.

Data sources (queried in parallel):

  • PeeringDB (www.peeringdb.com) — the internet's peering registry

  • RIPE Stat asn-neighbours — live BGP upstream/downstream relationships
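The parallel fan-out over the two data sources can be sketched as follows. The URL templates and the `query_sources`/`fetch` names are illustrative assumptions, not the server's actual implementation; `fetch` is injected as a callable so the sketch works without a live network:

```python
from concurrent.futures import ThreadPoolExecutor

# Assumed endpoint templates for the two public data sources.
PEERINGDB_URL = "https://www.peeringdb.com/api/net?asn={asn}"
RIPESTAT_URL = "https://stat.ripe.net/data/asn-neighbours/data.json?resource=AS{asn}"

def query_sources(asn: int, fetch) -> dict:
    """Query PeeringDB and RIPE Stat concurrently.

    `fetch` is an injected callable (url -> parsed response), so the
    sketch stays testable without network access.
    """
    urls = {
        "peeringdb": PEERINGDB_URL.format(asn=asn),
        "ripestat": RIPESTAT_URL.format(asn=asn),
    }
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = {name: pool.submit(fetch, url) for name, url in urls.items()}
        # Collect both results; either source failing would surface here.
        return {name: fut.result() for name, fut in futures.items()}
```

Injecting `fetch` also makes it easy to swap in retries or per-source error handling without touching the fan-out logic.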

Information returned:

  • Peering policy: Open / Selective / Restrictive / No Peering

  • IRR AS-SET (used in route filters, e.g. AS-CLOUDFLARE)

  • NOC email, abuse email, peering contact email

  • IXP presence: which exchange points, peering IPs, link speed

  • BGP neighbours: up to 30 adjacent ASNs in the routing table

Use cases:

  • "Does Cloudflare have an Open peering policy?"

  • "Which IXPs is AS13335 present at?"

  • "What is the NOC email for AS1234 to report an incident?"

  • "Is this ASN a residential ISP or a CDN?"

Results are cached for 6 hours.

Args: params (PeeringInfoInput):

  • asn (str): ASN to look up (e.g. 'AS13335', '13335', 'AS-CLOUDFLARE')

  • response_format (str): 'markdown' (default) or 'json'
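Since the tool accepts several ASN spellings, a caller-side normalization helper might look like this. The helper is hypothetical, not part of the tool; AS-SET names such as 'AS-CLOUDFLARE' are passed through unchanged because they are not plain ASNs:

```python
import re

def normalize_asn(raw: str) -> str:
    """Normalize 'AS13335', 'as13335', or '13335' to canonical 'AS13335'."""
    value = raw.strip()
    m = re.fullmatch(r"(?i)AS(\d+)|(\d+)", value)
    if m:
        return "AS" + (m.group(1) or m.group(2))
    return value  # e.g. an IRR AS-SET like 'AS-CLOUDFLARE'
```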

Returns: str: Peering policy table, IXP presence table, contacts, BGP neighbours.

JSON schema:

  {
    "asn": str,
    "network_name": str,
    "policy_general": str,
    "noc_email": str,
    "irr_as_set": str,
    "ixp_presence": [{"name": str, "city": str, "country": str, "ipaddr4": str, "ipaddr6": str, "speed": int}],
    "neighbour_asns": [str],
    "errors": [str]
  }
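A result following the documented JSON schema might be consumed like this. The field names come from the schema; every value below is an invented placeholder (documentation IPs, a made-up contact address):

```python
import json

# Sample payload shaped like the documented schema; values are placeholders.
sample = json.loads("""
{
  "asn": "AS13335",
  "network_name": "Example CDN",
  "policy_general": "Open",
  "noc_email": "noc@example.net",
  "irr_as_set": "AS-EXAMPLE",
  "ixp_presence": [
    {"name": "Example-IX", "city": "Frankfurt", "country": "DE",
     "ipaddr4": "192.0.2.1", "ipaddr6": "2001:db8::1", "speed": 100000}
  ],
  "neighbour_asns": ["AS174", "AS3356"],
  "errors": []
}
""")

# Answering the example use cases programmatically:
is_open = sample["policy_general"] == "Open"
ixp_names = [ixp["name"] for ixp in sample["ixp_presence"]]
noc = sample["noc_email"]
```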

Input Schema

  Name     Required   Description   Default
  params   Yes        —             —

Output Schema

  Name     Required   Description   Default
  result   Yes        —             —
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the safety annotations (readOnly/idempotent), the description adds critical behavioral context: the 6-hour cache duration, parallel querying of PeeringDB and RIPE Stat, and the 'up to 30 adjacent ASNs' result limit. It does not mention authentication requirements or rate limits, preventing a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is densely packed but well-structured with clear sections (Data sources, Information returned, Use cases, Args, Returns). Every sentence earns its place by providing specific technical details. It is slightly verbose but appropriately so for the complexity of the data returned.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple data sources, nested output objects), the description is complete. It documents the return value's JSON schema explicitly, explains all output fields in the 'Information returned' section, and notes caching behavior. The output schema presence means return values don't need exhaustive prose explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage at the top level (the 'params' object lacks a description), the Args section fully compensates by documenting both parameters with clear types, examples ('AS13335', '13335'), and allowed values ('markdown' or 'json'). This effectively bridges the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise verb ('Fetch') and enumerates specific resources (peering policy, IXP presence, NOC contacts, BGP neighbours) scoped to an ASN. It clearly differentiates from siblings like `rir_query_asn` or `rir_as_relationships` by specifying the exact PeeringDB and RIPE Stat data sources used.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Use cases' section provides four concrete scenarios (e.g., 'Does Cloudflare have an Open peering policy?') that clearly establish when to invoke this tool. However, it lacks explicit guidance on when NOT to use it or which sibling tools (like `rir_query_asn`) should be preferred for simpler lookups.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/duksh/peerglass'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.