MusicBrainz MCP
Server Details
MusicBrainz MCP — wraps MusicBrainz Web Service v2 (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-musicbrainz
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: two are for searching (artists and releases), and two are for getting detailed information (artist and release). The descriptions explicitly differentiate them by resource type and action, with no overlap or ambiguity.
All tool names follow a consistent verb_noun pattern (get_artist, get_release, search_artists, search_releases). The naming is uniform and predictable, using snake_case throughout with clear verbs that match the actions.
With 4 tools, this server is well-scoped for its purpose of interacting with the MusicBrainz database. Each tool earns its place, covering essential operations for artists and releases without being too sparse or bloated.
The tool set provides complete coverage for the core domain of querying artist and release data, with search and get operations for both. A minor gap exists in lacking update or delete tools, but this is reasonable for a read-only database interface, and agents can work effectively with the provided tools.
Available Tools
4 tools

get_artist (Grade B)
Get detailed information about an artist including their release list. Use the MusicBrainz ID from search_artists.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | MusicBrainz artist ID (UUID). | |
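For illustration, a minimal sketch of the JSON-RPC `tools/call` request an MCP client would send for this tool, with client-side UUID validation added since the schema documents the id as a UUID. The envelope shape follows the MCP specification; the sample MBID is only a placeholder input.

```python
import json
import uuid

def build_get_artist_call(artist_id: str) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request for get_artist.

    Validates that the id is a well-formed UUID before sending,
    since the schema documents it as a MusicBrainz artist ID (UUID).
    """
    uuid.UUID(artist_id)  # raises ValueError on a malformed id
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_artist",
            "arguments": {"id": artist_id},
        },
    }

# Placeholder MBID, as would be obtained from search_artists.
request = build_get_artist_call("a74b1b7f-71a5-4011-9441-d0b5e4122711")
print(json.dumps(request, indent=2))
```

Validating the UUID locally avoids a round trip to the server for inputs the schema already rules out.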
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. The description mentions retrieving 'detailed information' and 'release list' but doesn't specify what that includes (e.g., biography, genres, images), whether it's a read-only operation, potential rate limits, error conditions, or response format. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that are front-loaded: the first states the purpose, and the second provides usage guidance. There's no wasted text, and each sentence adds value. It could be slightly more structured by explicitly separating purpose from prerequisites.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and a prerequisite but lacks details on behavioral aspects (e.g., what 'detailed information' entails, error handling) and doesn't leverage the absence of annotations to provide richer context. It meets the minimum viable threshold but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100% (the single parameter 'id' is documented as 'MusicBrainz artist ID (UUID)'), so the baseline is 3. The description adds minimal value beyond the schema by specifying 'Use the MusicBrainz ID from search_artists,' which provides context on where to obtain the ID but doesn't elaborate on parameter semantics like format constraints or usage nuances.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get detailed information about an artist including their release list.' It specifies the verb ('Get'), resource ('artist'), and scope ('detailed information... including their release list'). However, it doesn't explicitly differentiate from sibling tools like 'get_release' or 'search_artists' beyond mentioning the ID source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance: 'Use the MusicBrainz ID from search_artists.' This suggests a workflow dependency but doesn't explicitly state when to use this tool versus alternatives like 'search_artists' for finding artists or 'get_release' for release details. No explicit when-not-to-use or alternative scenarios are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_release (Grade A)
Get detailed information about a release including its full track listing. Use the MusicBrainz ID from search_releases.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | MusicBrainz release ID (UUID). | |
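Since the server wraps MusicBrainz Web Service v2, the tool presumably issues a WS/2 release lookup under the hood (an assumption; the server's internals are not documented here). A sketch of the underlying request URL, where the `inc=recordings` parameter is what pulls in the full track listing:

```python
from urllib.parse import urlencode

WS2_ROOT = "https://musicbrainz.org/ws/2"

def release_lookup_url(release_id: str) -> str:
    """URL for a WS/2 release lookup; inc=recordings adds the track list."""
    params = urlencode({"inc": "recordings", "fmt": "json"})
    return f"{WS2_ROOT}/release/{release_id}?{params}"

# Placeholder MBID, as would be obtained from search_releases.
print(release_lookup_url("aff4a693-5970-4e2e-bd46-e2ee49c22de7"))
```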
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what the tool returns ('detailed information about a release including its full track listing') but doesn't mention potential limitations like rate limits, authentication requirements, error conditions, or response format. The description adds basic context but lacks comprehensive behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences that each serve a clear purpose: the first states what the tool does, and the second provides usage guidance. There's zero wasted language, and it's appropriately front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter lookup), 100% schema coverage, but no output schema or annotations, the description is minimally adequate. It explains what information is returned but doesn't describe the response structure or format. For a tool that returns 'detailed information,' more context about the output would be helpful since there's no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'id' documented as 'MusicBrainz release ID (UUID).' The description adds minimal value beyond this by mentioning 'Use the MusicBrainz ID from search_releases,' which provides usage context but no additional parameter semantics. This meets the baseline of 3 when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get detailed information about a release including its full track listing.' It specifies the verb ('Get'), resource ('release'), and scope ('detailed information including full track listing'). However, it doesn't explicitly distinguish this from sibling tools like search_releases, which appears to be a search function rather than a detailed lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'Use the MusicBrainz ID from search_releases.' This implies it should be used after obtaining an ID from the search_releases tool. However, it doesn't explicitly state when NOT to use it or mention alternatives like get_artist for artist information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_artists (Grade B)
Search for music artists by name using the MusicBrainz database.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return. Defaults to 10. | |
| query | Yes | Artist name or search query. | |
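Assuming the tool maps onto the WS/2 artist search endpoint, a minimal sketch of how the two parameters translate into a query string, with `limit` defaulting to 10 as the schema states:

```python
from urllib.parse import urlencode

def artist_search_query(query: str, limit: int = 10) -> str:
    """Query string for a WS/2 artist search; limit defaults to 10 per the schema."""
    return urlencode({"query": query, "limit": limit, "fmt": "json"})

# urlencode handles non-ASCII artist names transparently.
print("https://musicbrainz.org/ws/2/artist?" + artist_search_query("Björk"))
```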
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the database source ('MusicBrainz') but does not cover key behavioral traits such as rate limits, authentication needs, error handling, or response format. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond its basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded with the core action and resource, making it easy to understand quickly. Every part of the sentence contributes to clarifying the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search function with 2 parameters), no annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and data source but lacks details on behavioral aspects, error cases, or result structure. Without annotations or output schema, more context would be beneficial for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents both parameters ('query' and 'limit') with descriptions. The description adds minimal value beyond the schema by implying the query is for artist names, but does not provide additional syntax, format details, or usage examples. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search for music artists by name using the MusicBrainz database.' It specifies the verb ('Search'), resource ('music artists'), and method ('by name'), but does not explicitly differentiate it from sibling tools like 'search_releases' beyond the resource type. This makes it clear but not fully sibling-distinctive.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'by name' and 'MusicBrainz database,' but does not provide explicit guidance on when to use this tool versus alternatives like 'get_artist' or 'search_releases.' It lacks statements on when-not-to-use or direct comparisons, leaving usage somewhat inferred rather than clearly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_releases (Grade C)
Search for albums and releases by title or query.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return. Defaults to 10. | |
| query | Yes | Release title or search query. | |
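The intended workflow across the four tools is search-then-get: find candidate MBIDs with a search tool, then fetch details with the matching lookup tool. A sketch of that two-step flow as MCP `tools/call` payloads (the envelope shape follows the MCP specification; the MBID placeholder stands in for a value read from the search results):

```python
def tool_call(name: str, arguments: dict, call_id: int) -> dict:
    """JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Step 1: find candidate releases by title.
search = tool_call("search_releases", {"query": "OK Computer", "limit": 5}, 1)
# Step 2: fetch full details, using an MBID taken from the step-1 results.
detail = tool_call("get_release", {"id": "<mbid-from-step-1>"}, 2)
```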
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the search functionality without mentioning any behavioral traits such as rate limits, authentication needs, pagination, or what happens on no results (e.g., returns empty list). This is inadequate for a search tool that likely interacts with external data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'Search for albums and releases by title or query.' It is front-loaded with the core purpose and contains no unnecessary words, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a search tool with no annotations and no output schema, the description is incomplete. It lacks information on behavioral aspects (e.g., how results are returned, error handling) and doesn't compensate for the missing output schema. This leaves significant gaps for an agent to understand the tool's full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting both parameters ('query' and 'limit'). The description adds no additional meaning beyond the schema, as it only mentions 'title or query' without explaining syntax or format. This meets the baseline score of 3 since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search for albums and releases by title or query.' It specifies the verb ('search') and resource ('albums and releases'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'search_artists' beyond the resource type, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_release' (which might fetch a specific release) or 'search_artists' (which searches for artists instead), nor does it specify contexts or exclusions for usage. This lack of comparative information leaves the agent without clear direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
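A minimal sketch of reading the maintainer emails out of such a file. The field names follow the snippet above; the actual verification Glama performs is not documented here and may differ.

```python
import json

def maintainer_emails(glama_json: str) -> list:
    """Extract maintainer emails from a /.well-known/glama.json document.

    Field names follow the published snippet; Glama's own verification
    logic is an assumption and may check more than this.
    """
    doc = json.loads(glama_json)
    return [m["email"] for m in doc.get("maintainers", [])]

sample = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
print(maintainer_emails(sample))
```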
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!