Matthew Hartley Music MCP
Server Details
The first artist-owned MCP server. Discover, narrate, and stream Matthew Hartley's debut album The Time Is Now from any compatible AI client. Exposes 8 tools (list_songs, get_song, list_chapters, get_chapter, get_artist, get_experience, get_experience_prompt, refresh_stream_urls) over a public HTTP endpoint with no auth. Apache 2.0 licensed.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
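Because the transport is Streamable HTTP with no auth, any MCP client can POST JSON-RPC messages directly to the endpoint. As a minimal sketch (after the standard MCP initialize handshake), the tool catalog can be fetched with:

{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }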
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 8 tools.
Each tool has a clearly distinct purpose: artist profile, chapters, songs, curated experiences, AI prompts, and URL refreshing. No overlapping functionality.
All tools follow a consistent verb_noun pattern using snake_case (get_*, list_*, refresh_*), making them predictable and easy to understand.
With 8 tools, the server is well-scoped for a music artist content server, covering the key operations without bloat or notable gaps.
Core functionality for browsing artist, chapters, songs, and experiences is covered. A minor gap is the lack of a tool to list all experiences or manage resources, but given the read-only nature, this is acceptable.
Available Tools
8 tools

get_artist
Get the artist profile including bio, images, social links, narrative voice, and AI presentation directives. Use this to introduce the artist to first-time listeners.
No parameters.
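As a reference call, assuming the standard MCP tools/call envelope (the id value is arbitrary):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": { "name": "get_artist", "arguments": {} }
}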
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It indicates a read operation and lists the returned content, but does not disclose potential side effects, auth requirements, or rate limits. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences: one clearly states purpose and content, the other provides usage guidance. No superfluous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description enumerates the key data categories returned. For a simple parameterless tool, this provides sufficient context for an agent to understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters in the input schema, so the baseline is 4. The description does not need to add parameter info beyond what the schema already covers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it retrieves an artist profile and lists the specific content returned (bio, images, social links, narrative voice, AI presentation directives), distinguishing it from sibling tools that deal with chapters, experiences, and songs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this to introduce the artist to first-time listeners.' No exclusions or alternatives are mentioned, but context shows no other artist retrieval tools exist among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_chapter
Get a single chapter with all ACF fields and its list of songs.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | No | Chapter slug (e.g., "awakening") | |
| term_id | No | Chapter term ID | |
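A sketch of a tools/call request using the slug example from the schema (either slug or term_id should identify the chapter):

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": { "name": "get_chapter", "arguments": { "slug": "awakening" } }
}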
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the burden. It states that the tool retrieves a chapter with its fields and songs, which is transparent for a read operation. However, it does not disclose the behavior when neither slug nor term_id is provided, even though both are optional in the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence, front-loaded with action and resource, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description should clarify the return structure. It mentions ACF fields and the list of songs but omits other likely fields such as title and date.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both parameters have descriptions in the schema). The tool description adds no additional semantic meaning beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets a single chapter with ACF fields and songs, distinguishing it from siblings like list_chapters (list vs single) and get_song (different resource).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., list_chapters for multiple chapters). Usage is implied but not clarified.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_experience
Returns a fully curated, sequenced playlist for a listening mode — including song data, stories, quotes, cover art, visual scene directives, and fresh stream URLs — in a single call. Use this instead of calling get_song + get_stream_url per track.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Listening mode: late_night, devotional, acoustic_focus, cinematic, quiet_listening, full_journey. Defaults to full_journey if omitted. | |
| limit | No | Max songs to return (default 5, or 20 for full_journey) | |
| chapter_slug | No | Optional: filter to a single chapter slug | |
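A sketch using one of the enumerated listening modes; per the schema defaults, omitting mode falls back to full_journey, and omitting limit falls back to 5 (20 for full_journey):

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_experience",
    "arguments": { "mode": "late_night", "limit": 5 }
  }
}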
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With empty annotations, the description discloses what the tool returns (song data, stories, quotes, etc.) and that it is a single-call operation. It does not cover side effects or authentication needs, but the disclosed behavior is sufficient for typical use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the core purpose, and contains no redundant information. Every phrase adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema and optional parameters, the description covers the return contents well. It lacks default behavior details and error scenarios, but the schema provides defaults and the description is otherwise complete for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes all three parameters (mode, limit, chapter_slug) with defaults and allowed values. The description adds no extra semantic detail beyond what the schema provides, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description precisely states that the tool returns a curated playlist for a listening mode, including detailed contents like song data, stories, and quotes. It distinguishes itself from siblings by explicitly advising against calling get_song and get_stream_url individually.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly directs to use this tool instead of per-track calls, setting usage context. However, it does not mention scenarios where this tool might be inappropriate or when alternatives like get_experience_prompt are better.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_experience_prompt
Returns the curated Claude prompt, intro hint, pairing hint, arc role, and experience recipes for a song. Use this to get first-class directives for how to present a song rather than interpreting raw metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Optional: listening mode to select the most relevant starter prompt | |
| slug | Yes | Song slug | |
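A sketch; the slug value ("holding-on") is borrowed from the get_song schema example, and the mode vocabulary is assumed, not confirmed, to match get_experience's listening modes:

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_experience_prompt",
    "arguments": { "slug": "holding-on", "mode": "devotional" }
  }
}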
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full behavioral burden but only describes output, not side effects, auth needs, or rate limits. A read operation likely has minimal side effects, but the disclosure is absent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences with no filler. Every sentence adds value: first lists outputs, second provides usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description enumerates all return components (prompt, hints, role, recipes), providing sufficient completeness for an agent to know what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no new parameter meaning beyond the schema; it only reiterates the song context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns specific curated content (Claude prompt, intro hint, pairing hint, arc role, experience recipes) for a song, distinguishing it from siblings that may return raw metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises using this tool 'to get first-class directives for how to present a song rather than interpreting raw metadata', implying when it is appropriate but not explicitly naming alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_song
Get a single song with all ACF fields including lyrics, quotes, cover art URLs, streaming links, experience data, and chapter assignment.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Song post ID | |
| slug | No | Song slug (e.g., "holding-on") | |
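A sketch using the schema's slug example; id is the alternative lookup key:

{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": { "name": "get_song", "arguments": { "slug": "holding-on" } }
}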
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the burden. It describes what data is returned but does not disclose behavior for edge cases (e.g., if both id and slug are provided, or if neither is provided). There is no mention of authentication, rate limits, or side effects. It is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence, no fluff. Every word adds value. Front-loaded with the main purpose, then lists key fields. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description must explain returns. It does so by listing many ACF fields. However, it could be more structured (e.g., indicating response format). Still, for a simple retrieval tool with few params, it is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions for both 'id' and 'slug'. The tool description adds no new meaning beyond the schema; it lists response fields but not parameter usage. The baseline of 3 is appropriate, as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves a single song with all ACF fields, including specific content like lyrics, quotes, cover art URLs, etc. This distinguishes it from sibling tools like list_songs (which returns multiple) and get_artist (different entity).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description implies it is for detailed song retrieval but does not mention exclusions or when to use siblings (e.g., list_songs for browsing, get_artist for artist info). Sibling tool names provide some implicit differentiation, but the description itself offers none.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_chapters
List all chapters with numeral, name, year range, intro, banner image URL, mood tags, and song count.
No parameters.
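As with get_artist, the call takes an empty arguments object:

{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": { "name": "list_chapters", "arguments": {} }
}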
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It only states that it lists all chapters, without mentioning behavioral traits like read-only status, auth requirements, pagination, or performance considerations. Minimal transparency beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with front-loaded purpose and specific output details. No redundant words; every part of the description is informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters and no output schema, the description provides a reasonably complete picture of what is returned. However, it lacks mention of ordering, limit, or any filtering capability, which would be helpful for a complete understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the baseline is 4 per the rubric. The description adds value by detailing the output fields (year range, intro, banner URL, etc.), which compensates for the lack of input parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and the resource 'all chapters', and specifies the returned fields (numeral, name, year range, etc.), distinguishing it from siblings like get_chapter which retrieves a single chapter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for listing all chapters, but does not explicitly mention when to avoid using it or compare with alternatives like get_chapter or list_songs. Usage context is only indirectly inferred from sibling names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_songs
List all songs with title, year, chapter, release status, and summary. Optionally filter by chapter slug or release status.
| Name | Required | Description | Default |
|---|---|---|---|
| chapter | No | Filter by chapter slug (e.g., "awakening", "innocence-heartbreak") | |
| release_status | No | Filter by status: "released", "upcoming", or "archived" | |
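A sketch combining both optional filters, using values drawn from the schema examples:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "list_songs",
    "arguments": { "chapter": "awakening", "release_status": "released" }
  }
}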
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes returned fields but does not mention ordering, pagination, or performance characteristics. With no annotations, this leaves gaps for an agent needing to handle large datasets.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with the main purpose. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple list with two optional parameters. Lacks output schema info and pagination details, but is mostly complete given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already has 100% coverage with descriptions. Description adds minimal extra meaning beyond stating the filters are optional.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'List all songs with title, year, chapter, release status, and summary', using a specific verb and resource. This distinguishes it from siblings like get_song (single song) and list_chapters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clearly indicates optional filters by chapter slug or release status. Does not explicitly discuss when not to use this tool or mention alternatives, but sibling tool names provide implicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
refresh_stream_urls
Accepts an array of song slugs and returns fresh signed stream URLs for all of them in one call. Use this ~2 minutes before stream URLs expire to keep playback uninterrupted.
| Name | Required | Description | Default |
|---|---|---|---|
| slugs | Yes | Array of song slugs to refresh, e.g. ["holding-on","without-you"] | |
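A sketch using the schema's example slugs; batching all active tracks into one call avoids a refresh round-trip per song:

{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "refresh_stream_urls",
    "arguments": { "slugs": ["holding-on", "without-you"] }
  }
}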
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool returns fresh signed URLs, implying a read/refresh operation. However, it does not mention authentication requirements, rate limits, or behavior on invalid slugs, leaving gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise sentences with no redundant words. It front-loads the core function and follows with usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description is fairly complete. It explains the purpose, provides usage timing, and references an example in the schema. It could mention the return format, but it's not essential given the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the schema includes a clear example for the 'slugs' parameter. The description only reiterates 'array of song slugs,' adding no semantics beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts an array of song slugs and returns fresh signed stream URLs. It uses a specific verb ('returns') and resource ('signed stream URLs'), distinguishing it from sibling tools which focus on getting metadata or lists.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit timing guidance: 'Use this ~2 minutes before stream URLs expire to keep playback uninterrupted.' This tells the agent when to invoke the tool. It does not explicitly mention when not to use or alternatives, but siblings are unrelated, reducing the need.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status shows as unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.