radio
Server Details
Radio MCP — wraps Radio Browser API (free, no auth)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-radio
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 9 of 9 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but 'ask_pipeworx' and 'discover_tools' could cause confusion as both involve finding information or tools, though their scopes differ (general queries vs. tool discovery). The radio-specific tools are clearly separated from memory and utility functions.
Naming is inconsistent: 'ask_pipeworx', 'discover_tools', and the radio tools (e.g., 'get_top_stations') follow a verb_noun pattern, while the memory tools ('forget', 'recall', 'remember') are single verbs. The mix makes the set less predictable, though still readable.
With 9 tools, the count is reasonable for a server combining radio functionality with memory and query utilities. It's slightly high but well-scoped, covering core operations without being overwhelming.
For the radio domain, tools cover discovery (search, top stations, tags, countries) but lack CRUD operations like creating or managing stations, which might be intentional. The memory tools (remember, recall, forget) and query utilities provide additional surface, but gaps exist in full lifecycle coverage for radio stations.
Available Tools
9 tools

ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
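For illustration, a tools/call request for this tool might look like the sketch below, assuming standard MCP JSON-RPC framing; the question is taken from the examples in the description:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": { "question": "What is the US trade deficit with China?" }
  }
}
```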
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the tool's behavior: 'Pipeworx picks the right tool, fills the arguments, and returns the result,' which adds context about automation and result delivery. However, it lacks details on error handling, response format, rate limits, or authentication needs, leaving gaps in behavioral understanding for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality in the first sentence, followed by explanatory details and examples. Every sentence earns its place by clarifying usage, differentiating from alternatives, and providing concrete examples. It is appropriately sized, avoiding unnecessary verbosity while being informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language querying with automated tool selection), the description is mostly complete. It explains the purpose, usage, and behavior adequately. However, without an output schema, it does not detail the return values or potential error responses, which could be helpful for an agent. The lack of annotations also means some behavioral aspects are underspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the 'question' parameter well-documented in the schema. The description adds value by emphasizing 'plain English' and 'natural language,' clarifying the expected input style beyond the schema's technical description. It also provides examples that illustrate parameter usage, enhancing semantic understanding without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('data source'), and distinguishes itself from sibling tools by emphasizing natural language processing rather than browsing specific tools or schemas. The examples further clarify the scope and differentiate it from tools like 'search_stations' or 'list_countries'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It contrasts with alternatives by implying that other tools might require browsing or schema knowledge, and provides clear examples of appropriate use cases like factual queries or data lookups, guiding users away from using it for unrelated tasks like station searches or country listings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
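A rough sketch of a call, assuming standard MCP JSON-RPC framing; the query echoes one of the schema's own examples and the limit value is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": { "query": "find trade data between countries", "limit": 5 }
  }
}
```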
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search functionality and return format (tools with names and descriptions), but doesn't mention performance characteristics, error conditions, or authentication requirements. The description adds some context about when to use it, but lacks details on rate limits, pagination, or what happens with no matches.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence explains what the tool does, and the second provides crucial usage guidance. There's zero waste or redundancy, and the most important information (when to use it) is appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters) and 100% schema coverage, the description is reasonably complete. It explains the purpose and provides excellent usage guidance. However, with no output schema and no annotations, it could benefit from mentioning what the return structure looks like or any limitations of the search algorithm.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions the general purpose ('search by describing what you need') but provides no additional syntax, format, or constraint details for the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resources ('Pipeworx tool catalog', 'most relevant tools with names and descriptions'). It distinguishes from sibling tools by focusing on tool discovery rather than listing specific entities like stations, countries, or tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear conditions for when to use this tool versus alternatives, including the threshold (500+ tools) and the specific scenario (finding the right tools for a task).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
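A minimal sketch of a call, assuming MCP JSON-RPC framing; the key value is illustrative and borrows an example from the 'remember' schema:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": { "key": "subject_property" }
  }
}
```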
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only states the action ('Delete') without disclosing behavioral traits like whether deletion is permanent, requires specific permissions, or has side effects. It adds minimal context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It is front-loaded and appropriately sized for a simple tool, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a deletion tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral implications (e.g., permanence, error handling) and doesn't compensate for the absence of structured data, leaving gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter ('key'). The description adds no additional meaning beyond what the schema provides, such as format examples or constraints, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Delete') and resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the action is distinct enough to infer separation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description lacks context about prerequisites (e.g., needing an existing memory key) or exclusions, leaving the agent to infer usage from the action alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_top_stations (B)
Get trending radio stations ranked by popularity, optionally filtered by country (e.g., 'US', 'GB', 'DE'). Returns station URLs, genres, and vote counts.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of stations to return. Defaults to 10. | |
| country | No | Filter by country name (e.g. "Germany", "United States"). Omit for global results. |
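An illustrative call, assuming MCP JSON-RPC framing; the country and count values are examples drawn from the parameter descriptions:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_top_stations",
    "arguments": { "country": "Germany", "count": 5 }
  }
}
```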
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool retrieves data ('Get') but doesn't specify if it's read-only, requires authentication, has rate limits, or describes the return format (e.g., list structure, pagination). For a tool with no annotation coverage, this leaves significant behavioral traits unexplained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two efficient sentences that front-load the core purpose and the optional country filter, then list the returned fields. There is no wasted language, and every part earns its place by conveying essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the basic purpose and optional filtering, but without annotations or output schema, it lacks details on behavioral traits (e.g., safety, response format) that would help an agent use it correctly. This meets minimum viability with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the input schema already fully documents the parameters ('count' and 'country'). The description adds minimal value beyond the schema by hinting at the optional country filter, but it doesn't provide additional semantics like format examples beyond 'Germany' or clarify how popularity is calculated. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get trending radio stations ranked by popularity' specifies the action (get) and resource (radio stations) with a popularity metric (vote counts in the returned data). It distinguishes itself from siblings like 'search_stations' by focusing on popularity ranking rather than general search. However, it doesn't explicitly contrast with 'list_countries' or 'list_tags', keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage through the optional country filter ('optionally filtered by country'), suggesting it can be used for global or country-specific queries. It doesn't explicitly state when to use this tool versus alternatives like 'search_stations' for non-popularity-based searches, nor does it mention prerequisites or exclusions, leaving some guidance gaps.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_countries (B)
Browse available countries with radio stations. Returns country names and station counts to help target your search geographically.
No parameters
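Since the tool takes no parameters, a call is just the tool name with empty arguments (MCP JSON-RPC framing assumed):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "list_countries",
    "arguments": {}
  }
}
```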
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool lists countries with station counts, but it doesn't describe key behavioral traits such as whether the list is paginated, sorted, or limited in scope, or if there are any rate limits or authentication requirements. This is a significant gap for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two efficient sentences: 'Browse available countries with radio stations. Returns country names and station counts to help target your search geographically.' It is front-loaded with the core purpose and includes no unnecessary words, making it concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple listing with no parameters) and the lack of annotations and output schema, the description is minimally adequate. It states what the tool does but misses details like output format, pagination, or error handling. With no output schema, the description should ideally explain return values more fully, but it provides a basic overview.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so coverage is trivially 100%. The description adds value by stating that the output includes station counts, which is useful semantic information beyond the empty schema. However, it doesn't mention any optional parameters or filtering options, so it's not a perfect score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Browse available countries with radio stations.' It specifies the verb ('browse'), resource ('countries'), and includes the additional detail of providing station counts. However, it doesn't explicitly differentiate from sibling tools like 'search_stations' or 'list_tags,' which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description hints at its role ('to help target your search geographically') but gives no explicit guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'search_stations' (which filters stations) or 'list_tags' (which lists tags instead of countries), nor does it specify any prerequisites or exclusions. This leaves the agent without clear direction on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tags (B)
Discover radio genres and tags ranked by station count. Use to explore what categories are available before searching.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tags to return. Defaults to 20. |
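An illustrative call, assuming MCP JSON-RPC framing; the limit value is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "list_tags",
    "arguments": { "limit": 10 }
  }
}
```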
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes a read-only listing operation, which is clear, but lacks details on permissions, rate limits, pagination, or error handling. For a tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that front-load the core purpose without unnecessary words. It's appropriately sized for a simple listing tool, clearly stating what the tool does and when to reach for it.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks guidance on usage versus siblings and behavioral details like output format or error cases, which would be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'limit' parameter fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, such as format constraints or examples. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Discover radio genres and tags ranked by station count.' It specifies the verb ('discover'), resource ('radio genres and tags'), and scope ('ranked by station count'). However, it doesn't explicitly differentiate from sibling tools like 'get_top_stations' or 'search_stations', which might also involve station data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers only brief guidance ('Use to explore what categories are available before searching') on when to use this tool versus alternatives. It doesn't contrast with sibling tools like 'get_top_stations' (which lists stations directly) or 'search_stations' (which filters stations), leaving the agent to infer fine-grained tool selection from names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
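An illustrative call, assuming MCP JSON-RPC framing; the key value is made up, and per the schema the 'key' argument can be omitted to list all stored keys:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": { "key": "user_preference" }
  }
}
```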
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that memories can be retrieved from current or previous sessions, which is useful behavioral context. However, it doesn't mention potential limitations like memory persistence, retrieval failures, or performance characteristics. The description doesn't contradict any annotations (none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with zero waste. The first sentence states the purpose and parameter behavior, the second provides usage context. Every word earns its place, and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 1 parameter with full schema coverage and no output schema, the description adequately covers the tool's purpose and basic usage. However, as a data retrieval tool with no annotations, it could benefit from more behavioral context about what 'memories' contain, format of returned data, or error conditions. It's minimally complete but has room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the key parameter. The description adds value by explaining the semantic behavior: 'omit to list all keys' clarifies what happens when the parameter is omitted. With 1 parameter and high schema coverage, this earns above the baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' It specifies the verb ('retrieve'/'list') and resource ('memory'), but doesn't explicitly differentiate from sibling tools like 'remember' or 'forget' beyond mentioning retrieval vs. saving context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It also explains the key parameter behavior ('omit key' to list all). However, it doesn't explicitly state when NOT to use it or mention alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
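An illustrative call, assuming MCP JSON-RPC framing; the key is taken from the schema examples and the stored value is made up:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "user_preference",
      "value": "prefers jazz stations from Germany"
    }
  }
}
```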
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly describes the tool's behavior as a storage operation, specifies persistence rules (authenticated vs. anonymous sessions), and implies it's a write operation. However, it doesn't mention potential limitations like storage size, rate limits, or error conditions, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the first sentence stating the core purpose and the second adding crucial context about persistence. Every sentence earns its place by providing essential information without redundancy or fluff, making it highly efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (a write operation with persistence rules), no annotations, and no output schema, the description does well by covering purpose, usage, and key behavioral traits. However, it lacks details on return values (e.g., confirmation message or error handling) and doesn't fully address all potential edge cases, leaving minor gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with good descriptions. The description adds minimal value beyond the schema by implying the parameters are used for storage but doesn't provide additional syntax, format, or usage details. This meets the baseline of 3 when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (likely for retrieval) and 'forget' (likely for deletion). It provides concrete examples of what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('to save intermediate findings, user preferences, or context across tool calls') and provides clear context about persistence differences ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which helps the agent decide when to use it versus alternatives like 'recall' for retrieval. It effectively guides usage without being misleading.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stations (B)
Search radio stations by name. Returns station name, URL, country, genres, and popularity vote count. Use when looking for a specific station or browsing by keyword.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return. Defaults to 10. | |
| query | Yes | Station name to search for. |
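An illustrative call, assuming MCP JSON-RPC framing; the station name and limit are made-up example values:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "search_stations",
    "arguments": { "query": "BBC Radio 1", "limit": 5 }
  }
}
```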
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It adds useful context beyond the input schema: it lists the returned fields ('station name, URL, country, genres, and popularity vote count'), which tells the agent what to expect back. However, it lacks details on other behavioral traits such as rate limits, authentication needs, error handling, or result ordering and pagination. For a search tool with no annotations, this is a moderate but incomplete disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: three concise sentences that state the purpose, the returned fields, and when to use the tool. Every sentence earns its place by providing essential information without redundancy or fluff, making it efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (a search function with 2 parameters), no annotations, and no output schema, the description is partially complete. It covers the purpose and sorting behavior but misses details like output format, error cases, or usage context relative to siblings. Without annotations or output schema, more information would be helpful for the agent to fully understand the tool's behavior and integration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, meaning the input schema fully documents both parameters ('query' and 'limit') with descriptions. The description adds no additional meaning beyond the schema; it doesn't explain parameter interactions, constraints, or usage examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search radio stations by name.' It specifies the verb ('search') and resource ('radio stations'), and distinguishes the tool from siblings like 'get_top_stations' (which returns popular stations without a query) and 'list_countries'/'list_tags' (which list metadata). However, it doesn't clarify whether the search also matches genres or countries, so its scope isn't fully pinned down.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives only brief guidance ('Use when looking for a specific station or browsing by keyword') on when to use this tool versus alternatives. It doesn't say when to choose 'search_stations' over 'get_top_stations' (e.g., keyword search vs. popularity-ranked results), 'list_countries' (e.g., narrowing by country), or 'list_tags' (e.g., genre-based browsing), and it names no explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.