
musicbrainz

Server Details

MusicBrainz MCP — wraps MusicBrainz Web Service v2 (free, no auth)

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
pipeworx-io/mcp-musicbrainz
GitHub Stars
0
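The server is a thin wrapper over MusicBrainz Web Service v2, which can also be called directly. Below is a minimal request-building sketch; the User-Agent identity is a hypothetical placeholder, reflecting MusicBrainz's expectation that anonymous clients identify themselves and stay at roughly one request per second:

```python
from urllib.parse import urlencode
from urllib.request import Request

# MusicBrainz WS v2 is free and unauthenticated, but it asks clients to
# send a descriptive User-Agent and throttles anonymous use to roughly
# one request per second. The identity string below is a placeholder.
BASE = "https://musicbrainz.org/ws/2"
USER_AGENT = "example-mcp-client/0.1 (ops@example.com)"

def ws2_request(entity: str, params: dict) -> Request:
    """Build (but do not send) a JSON request against a WS v2 endpoint."""
    query = urlencode({**params, "fmt": "json"})
    return Request(f"{BASE}/{entity}?{query}", headers={"User-Agent": USER_AGENT})

req = ws2_request("artist", {"query": "Radiohead", "limit": 10})
print(req.full_url)
# https://musicbrainz.org/ws/2/artist?query=Radiohead&limit=10&fmt=json
```

Sending the request (e.g. via `urllib.request.urlopen`) is left out so the sketch stays side-effect free.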

Tool Descriptions: B

Average 3.3/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: two are for searching (artists and releases), and two are for getting detailed information (artist and release). The descriptions explicitly differentiate them by resource type and action, with no overlap or ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_artist, get_release, search_artists, search_releases). The naming is uniform and predictable, using snake_case throughout with clear verbs that match the actions.

Tool Count: 5/5

With 4 tools, this server is well-scoped for its purpose of interacting with the MusicBrainz database. Each tool earns its place, covering essential operations for artists and releases without being too sparse or bloated.

Completeness: 4/5

The tool set provides complete coverage for the core domain of querying artist and release data, with search and get operations for both. A minor gap exists in lacking update or delete tools, but this is reasonable for a read-only database interface, and agents can work effectively with the provided tools.

Available Tools

4 tools
get_artist: B

Get detailed information about an artist including their release list. Use the MusicBrainz ID from search_artists.

Parameters (JSON Schema)
- id (required): MusicBrainz artist ID (UUID).
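Because id must be a UUID-formatted MBID, a client can validate it before calling the tool. A sketch of the equivalent direct WS v2 lookup, assuming inc=releases is what produces the release list (the server's actual inc flags are undocumented, and the MBID below is only an example; obtain real ones from search_artists):

```python
import uuid

def build_artist_lookup(mbid: str) -> str:
    """Validate the MBID, then build a WS v2 artist lookup URL.

    inc=releases is an assumption about how the release list is fetched.
    """
    uuid.UUID(mbid)  # raises ValueError for a malformed MBID
    return f"https://musicbrainz.org/ws/2/artist/{mbid}?inc=releases&fmt=json"

# Example MBID only; real IDs come from search_artists results.
print(build_artist_lookup("a74b1b7f-71a5-4011-9441-d0b5e4122711"))
```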
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description mentions retrieving 'detailed information' and 'release list' but doesn't specify what that includes (e.g., biography, genres, images), whether it's a read-only operation, potential rate limits, error conditions, or response format. For a tool with no annotations, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that are front-loaded: the first states the purpose, and the second provides usage guidance. There's no wasted text, and each sentence adds value. It could be slightly more structured by explicitly separating purpose from prerequisites.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and a prerequisite but lacks details on behavioral aspects (e.g., what 'detailed information' entails, error handling) and doesn't leverage the absence of annotations to provide richer context. It meets the minimum viable threshold but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100% (the single parameter 'id' is documented as 'MusicBrainz artist ID (UUID)'), so the baseline is 3. The description adds minimal value beyond the schema by specifying 'Use the MusicBrainz ID from search_artists,' which provides context on where to obtain the ID but doesn't elaborate on parameter semantics like format constraints or usage nuances.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get detailed information about an artist including their release list.' It specifies the verb ('Get'), resource ('artist'), and scope ('detailed information... including their release list'). However, it doesn't explicitly differentiate from sibling tools like 'get_release' or 'search_artists' beyond mentioning the ID source.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance: 'Use the MusicBrainz ID from search_artists.' This suggests a workflow dependency but doesn't explicitly state when to use this tool versus alternatives like 'search_artists' for finding artists or 'get_release' for release details. No explicit when-not-to-use or alternative scenarios are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_release: A

Get detailed information about a release including its full track listing. Use the MusicBrainz ID from search_releases.

Parameters (JSON Schema)
- id (required): MusicBrainz release ID (UUID).
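With no output schema, the shape of the "full track listing" is an assumption. In raw WS v2 JSON (a release lookup with inc=recordings), tracks sit under media[].tracks[]; a defensive flattening sketch under that assumption:

```python
def track_titles(release_json: dict) -> list[str]:
    """Flatten a WS v2 release payload (inc=recordings) into track titles.

    Whether the tool returns raw WS v2 JSON or a reshaped payload is
    not documented; the media[].tracks[] path is the WS v2 convention.
    """
    return [
        track["title"]
        for medium in release_json.get("media", [])
        for track in medium.get("tracks", [])
    ]

sample = {"media": [{"tracks": [{"title": "Track A"}, {"title": "Track B"}]}]}
print(track_titles(sample))  # ['Track A', 'Track B']
```

Using .get with empty defaults keeps the helper from raising if the payload turns out to be shaped differently.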
Behavior: 3/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what the tool returns ('detailed information about a release including its full track listing') but doesn't mention potential limitations like rate limits, authentication requirements, error conditions, or response format. The description adds basic context but lacks comprehensive behavioral details.

Conciseness: 5/5

The description is extremely concise with just two sentences that each serve a clear purpose: the first states what the tool does, and the second provides usage guidance. There's zero wasted language, and it's appropriately front-loaded with the core functionality.

Completeness: 3/5

Given the tool's moderate complexity (single parameter lookup), 100% schema coverage, but no output schema or annotations, the description is minimally adequate. It explains what information is returned but doesn't describe the response structure or format. For a tool that returns 'detailed information,' more context about the output would be helpful since there's no output schema.

Parameters: 3/5

The input schema has 100% description coverage, with the single parameter 'id' documented as 'MusicBrainz release ID (UUID).' The description adds minimal value beyond this by mentioning 'Use the MusicBrainz ID from search_releases,' which provides usage context but no additional parameter semantics. This meets the baseline of 3 when schema coverage is high.

Purpose: 4/5

The description clearly states the tool's purpose: 'Get detailed information about a release including its full track listing.' It specifies the verb ('Get'), resource ('release'), and scope ('detailed information including full track listing'). However, it doesn't explicitly distinguish this from sibling tools like search_releases, which appears to be a search function rather than a detailed lookup.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool: 'Use the MusicBrainz ID from search_releases.' This implies it should be used after obtaining an ID from the search_releases tool. However, it doesn't explicitly state when NOT to use it or mention alternatives like get_artist for artist information.

search_artists: B

Search for music artists by name using the MusicBrainz database.

Parameters (JSON Schema)
- limit (optional, default 10): Maximum number of results to return.
- query (required): Artist name or search query.
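MusicBrainz search endpoints interpret the query parameter as Lucene syntax, so characters like / or : in an artist name can change the query's meaning. Whether the server escapes these is not documented; a defensive client-side sketch:

```python
import re
from urllib.parse import urlencode

def escape_lucene(text: str) -> str:
    """Backslash-escape Lucene operator characters so names search literally."""
    return re.sub(r'([+\-&|!(){}\[\]^"~*?:\\/])', r'\\\1', text)

def build_artist_search(query: str, limit: int = 10) -> str:
    # limit defaults to 10, matching the tool's documented default
    params = urlencode({"query": escape_lucene(query), "limit": limit, "fmt": "json"})
    return f"https://musicbrainz.org/ws/2/artist?{params}"

print(build_artist_search("AC/DC"))  # the slash is escaped, then URL-encoded
```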
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the database source ('MusicBrainz') but does not cover key behavioral traits such as rate limits, authentication needs, error handling, or response format. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond its basic function.

Conciseness: 5/5

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded with the core action and resource, making it easy to understand quickly. Every part of the sentence contributes to clarifying the tool's function.

Completeness: 3/5

Given the tool's moderate complexity (search function with 2 parameters), no annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and data source but lacks details on behavioral aspects, error cases, or result structure. Without annotations or output schema, more context would be beneficial for full completeness.

Parameters: 3/5

Schema description coverage is 100%, so the input schema already documents both parameters ('query' and 'limit') with descriptions. The description adds minimal value beyond the schema by implying the query is for artist names, but does not provide additional syntax, format details, or usage examples. This meets the baseline for high schema coverage.

Purpose: 4/5

The description clearly states the tool's purpose: 'Search for music artists by name using the MusicBrainz database.' It specifies the verb ('Search'), resource ('music artists'), and method ('by name'), but does not explicitly differentiate it from sibling tools like 'search_artists' and 'search_releases' beyond the resource type. The purpose is clear, but not fully distinguished from its siblings.

Usage Guidelines: 3/5

The description implies usage context by mentioning 'by name' and 'MusicBrainz database,' but does not provide explicit guidance on when to use this tool versus alternatives like 'get_artist' or 'search_releases.' It lacks statements on when-not-to-use or direct comparisons, leaving usage somewhat inferred rather than clearly defined.

search_releases: C

Search for albums and releases by title or query.

Parameters (JSON Schema)
- limit (optional, default 10): Maximum number of results to return.
- query (required): Release title or search query.
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the search functionality without mentioning any behavioral traits such as rate limits, authentication needs, pagination, or what happens on no results (e.g., returns empty list). This is inadequate for a search tool that likely interacts with external data.

Conciseness: 5/5

The description is a single, efficient sentence: 'Search for albums and releases by title or query.' It is front-loaded with the core purpose and contains no unnecessary words, making it highly concise and well-structured.

Completeness: 2/5

Given the complexity of a search tool with no annotations and no output schema, the description is incomplete. It lacks information on behavioral aspects (e.g., how results are returned, error handling) and doesn't compensate for the missing output schema. This leaves significant gaps for an agent to understand the tool's full context.

Parameters: 3/5

The input schema has 100% description coverage, clearly documenting both parameters ('query' and 'limit'). The description adds no additional meaning beyond the schema, as it only mentions 'title or query' without explaining syntax or format. This meets the baseline score of 3 since the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool's purpose: 'Search for albums and releases by title or query.' It specifies the verb ('search') and resource ('albums and releases'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'search_artists' beyond the resource type, which prevents a perfect score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_release' (which might fetch a specific release) or 'search_artists' (which searches for artists instead), nor does it specify contexts or exclusions for usage. This lack of comparative information leaves the agent without clear direction.
