
Server Details

Radio MCP — wraps Radio Browser API (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-radio
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade B)

Average 3.2/5 across 4 of 4 tools scored.

Server Coherence (Grade A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_top_stations retrieves popular stations, list_countries provides country data, list_tags shows genres/tags, and search_stations finds stations by name. There is no overlap in functionality, making tool selection straightforward for an agent.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern (get_top_stations, list_countries, list_tags, search_stations), using snake_case throughout. The naming is predictable and enhances readability across the tool set.

Tool Count: 5/5

With 4 tools, this server is well-scoped for a radio station directory. Each tool serves a unique purpose (popularity, geography, genres, search), and there are no extraneous or missing tools for the apparent domain.

Completeness: 4/5

The tools cover core discovery and listing operations for radio stations, including popularity, country, genre, and search. A minor gap exists in CRUD operations (e.g., no create/update/delete for stations or votes), but this is reasonable for a read-only directory, and agents can work effectively with the provided tools.

Available Tools

4 tools
get_top_stations (Grade B)

Get the most popular radio stations by vote count, optionally filtered by country.

Parameters (JSON Schema)

count (optional): Number of stations to return. Defaults to 10.
country (optional): Filter by country name (e.g. "Germany", "United States"). Omit for global results.
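A minimal sketch of calling this tool over the Streamable HTTP transport listed above, using the MCP Python SDK. The endpoint URL is a placeholder (the page does not show it), and the argument values are illustrative only.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder: the server's actual endpoint URL is not shown on this page.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # Open the Streamable HTTP transport and an MCP client session on top of it.
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Both parameters are optional; omit "country" for global results.
            result = await session.call_tool(
                "get_top_stations",
                {"count": 5, "country": "Germany"},
            )
            # No output schema is declared, so inspect the returned content generically.
            for item in result.content:
                print(item)

asyncio.run(main())
```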
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool retrieves data ('Get') but doesn't specify if it's read-only, requires authentication, has rate limits, or describes the return format (e.g., list structure, pagination). For a tool with no annotation coverage, this leaves significant behavioral traits unexplained.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes the optional filter. There is no wasted language, and every part earns its place by conveying essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the basic purpose and optional filtering, but without annotations or output schema, it lacks details on behavioral traits (e.g., safety, response format) that would help an agent use it correctly. This meets minimum viability with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the input schema already fully documents the parameters ('count' and 'country'). The description adds minimal value beyond the schema by hinting at the optional country filter, but it doesn't provide additional semantics like format examples beyond 'Germany' or clarify how popularity is calculated. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the most popular radio stations by vote count' specifies the action (get) and resource (radio stations) with a popularity metric (vote count). It distinguishes from siblings like 'search_stations' by focusing on popularity ranking rather than general search. However, it doesn't explicitly contrast with 'list_countries' or 'list_tags', keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage through the optional country filter ('optionally filtered by country'), suggesting it can be used for global or country-specific queries. It doesn't explicitly state when to use this tool versus alternatives like 'search_stations' for non-popularity-based searches, nor does it mention prerequisites or exclusions, leaving some guidance gaps.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_countries (Grade B)

List countries that have radio stations, with station counts.

Parameters (JSON Schema)

No parameters
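Because the tool takes no parameters, a call reduces to the tool name with empty arguments. Below is a sketch of the JSON-RPC tools/call request body an MCP client would send; the request id is arbitrary.

```python
# JSON-RPC body an MCP client sends for a parameterless tools/call.
list_countries_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_countries",
        "arguments": {},
    },
}
```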

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool lists countries with station counts, but it doesn't describe key behavioral traits such as whether the list is paginated, sorted, or limited in scope, or if there are any rate limits or authentication requirements. This is a significant gap for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence: 'List countries that have radio stations, with station counts.' It is front-loaded with the core purpose and includes no unnecessary words, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple listing with no parameters) and the lack of annotations and output schema, the description is minimally adequate. It states what the tool does but misses details like output format, pagination, or error handling. With no output schema, the description should ideally explain return values more fully, but it provides a basic overview.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema defines no parameters, so coverage is vacuously 100% and there is nothing for the schema itself to document. The description adds value by implying the output includes station counts, which is useful semantic information beyond the empty schema. However, it doesn't detail any optional parameters or filtering options, so it's not a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List countries that have radio stations, with station counts.' It specifies the verb ('list'), resource ('countries'), and includes the additional detail of providing station counts. However, it doesn't explicitly differentiate from sibling tools like 'search_stations' or 'list_tags,' which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'search_stations' (which might filter stations) or 'list_tags' (which might list tags instead of countries), nor does it specify any context or exclusions for usage. This leaves the agent without clear direction on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tags (Grade B)

List the most common radio station genres and tags by station count.

Parameters (JSON Schema)

limit (optional): Maximum number of tags to return. Defaults to 20.
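Reusing the session from the get_top_stations sketch above, a call that caps the number of returned tags; the limit value here is only an example.

```python
# "limit" is optional and defaults to 20 per the schema description.
tags = await session.call_tool("list_tags", {"limit": 5})
```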
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes a read-only listing operation, which is clear, but lacks details on permissions, rate limits, pagination, or error handling. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It's appropriately sized for a simple listing tool and earns its place by clearly stating what the tool does.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks guidance on usage versus siblings and behavioral details like output format or error cases, which would be helpful for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'limit' parameter fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, such as format constraints or examples. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List the most common radio station genres and tags by station count.' It specifies the verb ('List'), resource ('radio station genres and tags'), and scope ('by station count'). However, it doesn't explicitly differentiate from sibling tools like 'get_top_stations' or 'search_stations', which might also involve station data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_top_stations' (which might list stations directly) or 'search_stations' (which might filter stations), leaving the agent to infer usage based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_stations (Grade B)

Search for radio stations by name. Results are ordered by votes (popularity).

Parameters (JSON Schema)

limit (optional): Maximum number of results to return. Defaults to 10.
query (required): Station name to search for.
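Again assuming the session from the first sketch, a search call; results come back ordered by votes, so the first matches are the most popular stations with that name. The query string is illustrative.

```python
# "query" is required; "limit" is optional and defaults to 10.
matches = await session.call_tool(
    "search_stations",
    {"query": "jazz", "limit": 3},
)
```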
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It adds useful context beyond the input schema: it specifies that 'Results are ordered by votes (popularity),' which informs the agent about sorting behavior. However, it lacks details on other behavioral traits such as rate limits, authentication needs, error handling, or what the output looks like (e.g., format, pagination). For a search tool with no annotations, this is a moderate but incomplete disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it's two concise sentences that directly state the purpose and key behavioral trait (ordering by votes). Every sentence earns its place by providing essential information without redundancy or fluff, making it efficient and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (a search function with 2 parameters), no annotations, and no output schema, the description is partially complete. It covers the purpose and sorting behavior but misses details like output format, error cases, or usage context relative to siblings. Without annotations or output schema, more information would be helpful for the agent to fully understand the tool's behavior and integration.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, meaning the input schema fully documents both parameters ('query' and 'limit') with descriptions. The description adds no additional meaning beyond the schema; it doesn't explain parameter interactions, constraints, or usage examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search for radio stations by name.' It specifies the verb ('search') and resource ('radio stations'), and distinguishes it from siblings like 'get_top_stations' (which likely returns top stations without search) and 'list_countries'/'list_tags' (which list metadata). However, it doesn't explicitly differentiate from potential siblings like 'search_stations_by_genre' or 'search_stations_by_country', so it's not fully specific to all alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose 'search_stations' over 'get_top_stations' (e.g., for popularity-based results vs. search-based), 'list_countries' (e.g., for filtering by country), or 'list_tags' (e.g., for genre-based searches). There's only an implied usage based on the purpose, with no explicit context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
