Spotify
Server Details
Spotify MCP — Web API via client_credentials OAuth
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-spotify
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 17 of 17 tools scored. Lowest: 1.7/5.
Most tools have distinct purposes (e.g., 'ask_pipeworx' for general queries, 'compare_entities' for comparisons, 'get_album' for music). Minor overlap exists between 'ask_pipeworx' and 'entity_profile' for company data, but descriptions clarify the scope.
The set uses multiple naming conventions: 'get_*' for music, verb_noun for Pipeworx tools (e.g., 'resolve_entity', 'validate_claim'), and standalone verbs ('forget', 'recall'). While each subgroup is consistent, the overall mix reduces predictability.
17 tools is high for a coherent set, especially with two unrelated domains (Spotify music and Pipeworx financial/economic data). The server appears to bundle separate functionalities, leading to unnecessary complexity for a single purpose.
The Spotify subset lacks important features like playlist management, library operations, and playback control. The Pipeworx tools cover company research adequately but feel out of place. The combined surface is incomplete for either domain.
Available Tools
17 tools
ask_pipeworx (Grade: A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
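As an illustration only, a call to this tool might look like the following, assuming the standard MCP JSON-RPC `tools/call` envelope; the question text is one of the examples from the description above and the request id is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the current unemployment rate?"
    }
  }
}
```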
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description reveals auto-routing behavior across many sources but omits failure modes, latency, or fallback behavior. Burden on description is partially met but lacks edge-case transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences plus illustrative examples. Every sentence adds value: first defines action, second specifies scope, examples clarify usage. No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-param query router with no output schema, the description covers purpose, scope, and typical usage adequately. However, it does not note limitations (e.g., not suited to multi-step queries) or describe the return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter. Description adds value beyond schema by giving examples and domain scope (SEC, FRED, etc.), enhancing the meaning of 'question'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers natural-language questions by auto-picking data sources, with verb 'answer' and explicit scope across 300+ sources, distinguishing it from siblings like search or compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases ('What is X?', examples) and rationale ('when you don't want to figure out which tool to call'), but does not mention when not to use or alternatives, though siblings are listed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade: A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
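A sketch of a company comparison call, again assuming the standard MCP JSON-RPC `tools/call` envelope; the tickers come from the examples in the description:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "compare_entities",
    "arguments": {
      "type": "company",
      "values": ["AAPL", "MSFT"]
    }
  }
}
```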
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose all behavioral traits. It explains that for companies it pulls financial data from SEC EDGAR/XBRL, and for drugs it pulls adverse events, approvals, and trials. It also mentions the output includes citation URIs. It does not mention rate limits or caching, but overall provides adequate transparency for a data retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at about 120 words across 6 sentences, with no unnecessary information. It front-loads the core purpose, then adds usage triggers, parameter details, and a note on efficiency. Each sentence serves a clear purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only 2 parameters and no output schema or annotations, the description provides enough context: what data is returned (paired data + citation URIs) and when to use it. It could be slightly more detailed about the output format, but the existing description is sufficient for an agent to understand the tool's role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers both parameters fully, but the description adds significant value by explaining the 'type' enum values and specifying the format and meaning of 'values' for each type (tickers for companies, drug names for drugs). This goes beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 companies or drugs side by side, with specific examples of user phrases that trigger its use. It distinguishes itself from siblings by noting it replaces 8-15 sequential calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit trigger phrases like 'compare X and Y' or 'X vs Y', covering common comparison scenarios. While it doesn't explicitly mention when not to use it, the context makes it clear this is for multi-entity comparisons, and alternatives like entity_profile exist for single entities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
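A sketch of a discovery call (MCP JSON-RPC envelope assumed); the query string is taken from the schema's own example and the limit is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "analyze housing market trends",
      "limit": 10
    }
  }
}
```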
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It explains the output ('Returns the top-N most relevant tools with names + descriptions') and implies a safe, read-only operation. No side effects mentioned, but for a discovery tool, this is sufficient. Could improve by noting if results are live or cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph with clear purpose statement first, followed by usage guidance, examples, and output description. Every sentence adds value; no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple discovery tool with 2 parameters and no output schema, the description fully covers what it does, when to call it, what to expect, and provides specific domain examples. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context by specifying that query is a 'natural language description' and gives default/max for limit, but this mostly aligns with schema descriptions. No additional semantic depth beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Find tools' and specifies the resource as 'tools by describing the data or task'. It lists example domains and distinguishes this discovery function from sibling tools that are task-specific (e.g., get_artist, compare_entities).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: 'Use when you need to browse, search, look up, or discover what tools exist' and 'Call this FIRST when you have many tools available'. Implies not to use when you already know the tool, providing clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (Grade: A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
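A sketch of a profile lookup (MCP JSON-RPC envelope assumed); the ticker is the example given in the parameter description:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "entity_profile",
    "arguments": {
      "type": "company",
      "value": "AAPL"
    }
  }
}
```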
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies the tool is read-only (it returns data and has no side effects) but never explicitly states that it is a read operation. The listed return types and citation URIs give good transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4 sentences), front-loaded with purpose and use cases, and efficiently lists what the tool returns without unnecessary detail. Every sentence adds unique value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregates multiple data sources) and lack of output schema, the description could be more complete by describing the return format structure. It lists contents but not how they are organized (e.g., keys, nesting). Adequate but leaves room for interpretation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. The description adds value by explaining the 'type' enum (only 'company' supported, future extensibility), providing example inputs for 'value' (ticker or CIK), and clarifying that names are not supported, which goes beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Get everything about a company in one call' and provides concrete use cases ('tell me about X', 'research Microsoft') and a clear list of returned data (SEC filings, fundamentals, patents, news, LEI), distinguishing itself from siblings like resolve_entity and compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit when-to-use scenarios, such as when a user asks for a company profile, and also tells the agent when not to use it (for names, use resolve_entity first), providing clear guidance on alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description clearly indicates deletion (mutation). Doesn't specify behavior for missing key, but overall adequate for a simple operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no waste. Action first, then usage conditions. Excellent efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one required param and no output schema, the description fully covers purpose, usage, and sibling relationships.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'key' (described as 'Memory key to delete'). Description adds no extra meaning, so baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool deletes a memory by key, distinguishing it from siblings remember and recall which store/retrieve.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use: context stale, task done, clear sensitive data. Also pairs with remember and recall for proper workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_album (Grade: D)
Album record with tracklist.
| Name | Required | Description | Default |
|---|---|---|---|
| market | No | Market for availability | |
| album_id | Yes | Spotify album ID |
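A sketch of an album lookup (MCP JSON-RPC envelope assumed); `<spotify-album-id>` is a placeholder for a real base62 album ID, and the same call shape applies to get_artist, get_track, and get_audio_features with their respective ID parameters:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_album",
    "arguments": {
      "album_id": "<spotify-album-id>",
      "market": "US"
    }
  }
}
```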
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations present, the description carries the full burden of disclosing behavioral traits like read-only nature, authentication needs, or side effects. It provides none of this information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
At 5 words, the description is extremely brief but lacks substantive content, making it under-specified rather than efficiently concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a retrieval tool with 2 parameters and no output schema, the description fails to explain the returned data (e.g., tracklist details) or any operational context, leaving the agent underinformed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All parameters are documented in the input schema with 100% coverage. The description adds no additional meaning beyond the schema, so it meets the baseline for this dimension.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Album record with tracklist' reads as a noun phrase describing what an album is, not as an action statement describing what the tool does. It fails to specify the retrieval verb and only weakly distinguishes from sibling tools like get_artist or get_track.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use get_album over alternative tools (e.g., get_track, get_artist) or any context about prerequisites or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_artist (Grade: B)
Artist record: name, genres, popularity, followers, images.
| Name | Required | Description | Default |
|---|---|---|---|
| artist_id | Yes | Spotify artist ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fails to disclose any behavioral traits like authentication requirements, rate limits, error handling for non-existent IDs, or side effects. It only lists returned fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short and to the point, using a single sentence to convey core functionality. It could be slightly more structured but is efficient for its purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description lists some returned fields but does not specify details like data types, array structures, or optional fields. It is minimally sufficient but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage for the single parameter (artist_id) with a clear description. The tool description adds no additional semantic meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns an artist record with specific fields (name, genres, popularity, followers, images), effectively distinguishing it from sibling tools like get_album or get_track.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, such as get_artist_top_tracks or search. The description implies it's for fetching by ID but does not specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_artist_top_tracks (Grade: B)
Top tracks for an artist in a market (Spotify recommendation).
| Name | Required | Description | Default |
|---|---|---|---|
| market | No | ISO market code (default "US") | |
| artist_id | Yes | Spotify artist ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, and description only labels it as a recommendation without detailing behavior such as rate limits, data freshness, or side effects. Minimal transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no redundant information. Efficient, though could benefit from slightly more detail without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, and the description does not hint at the return format or structure. It lacks details on default market behavior and the recommendation criteria, leaving gaps for a recommendation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with parameter descriptions. Description adds 'Spotify recommendation' context but no additional semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies the resource (top tracks for an artist) and context (market, Spotify recommendation). Distinguishes from siblings like get_artist and get_track.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like search or get_artist. Lacks context on appropriate use cases or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_audio_features (Grade: C)
Audio features for a track: tempo (BPM), key, mode (major/minor), time_signature, energy, danceability, valence, acousticness, instrumentalness, liveness, speechiness, loudness.
| Name | Required | Description | Default |
|---|---|---|---|
| track_id | Yes | Spotify track ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must carry the full burden of behavioral disclosure. It only lists output fields but does not state that it is a read-only operation, what happens if the track_id is invalid, or any rate limits. Minimal transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently lists the returned features. No fluff. However, it could benefit from slight restructuring (e.g., labeling them as fields in a JSON response) without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description enumerates the output fields, which is helpful but incomplete. It does not specify the return format (e.g., object), data types, or potential error states. Adequate for a simple list but not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the description on track_id ('Spotify track ID'). The tool description adds no additional meaning beyond that. Baseline 3 applies as the schema does the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves audio features for a track and lists the specific features (e.g., tempo, key, energy). This distinguishes it from sibling tools like 'get_track' which likely provide basic track info, though no explicit contrast is given.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. For example, it does not mention that 'get_track' might also include some features, or that this is the dedicated tool for audio analysis. No when-not-to-use or prerequisite info.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_track (Grade: A)
Single track by Spotify ID. Returns artists, album, popularity, preview URL, duration.
| Name | Required | Description | Default |
|---|---|---|---|
| market | No | Market for availability | |
| track_id | Yes | Spotify track ID (base62) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description should disclose behavioral traits. It mentions returned fields but omits side effects (none), rate limits, authentication needs, or that it is a read-only operation. The description adds minimal behavioral context beyond the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the resource name and immediately lists key returned fields. Every word adds value; no filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with 2 parameters and no output schema, the description covers the purpose and return fields adequately. It does not explain the optional market parameter or the response format, though the lack of an output schema reduces that burden. It is still sufficient for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters with descriptions. The description mentions 'by Spotify ID', reinforcing the track_id parameter's role, but does not explain the market parameter or add new semantic value beyond the schema. Baseline of 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a single track by Spotify ID and lists key returned fields (artists, album, popularity, preview URL, duration). This verb+resource structure distinguishes it from siblings like get_album or get_artist.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used when you have a Spotify track ID, but it does not explicitly state when to use it over alternatives like search or get_artist_top_tracks. No guidance on prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade: A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
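A sketch of a feedback submission (MCP JSON-RPC envelope assumed); the type and message values are made up for illustration, and the optional context field is omitted:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "pipeworx_feedback",
    "arguments": {
      "type": "data_gap",
      "message": "No tool currently exposes state-level unemployment series."
    }
  }
}
```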
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description discloses rate limits (5 per identifier per day), that it's free, doesn't count against quota, and team reviews digests daily. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise paragraph front-loading purpose, then providing usage scenarios and policies. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple feedback submission; description covers what, when, and how to submit, plus policies. No output schema needed; complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description doesn't add significant meaning beyond schema, though it reinforces usage context for the type parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is for telling the Pipeworx team about something broken, missing, or needed. It uses specific verbs and distinguishes from siblings by focusing on feedback submission.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (bug, feature/data_gap, praise) and what not to do (don't paste user prompt). Also mentions rate limits and that it's free, providing clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
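A sketch of a retrieval call (MCP JSON-RPC envelope assumed); the key name is illustrative, and passing an empty `arguments` object instead would list all saved keys:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {
      "key": "target_ticker"
    }
  }
}
```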
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It implies a read-only operation via 'retrieve' and 'list', but does not explicitly state idempotency or that no destructive actions occur. The scoping and pairing info are helpful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise, well-structured sentences. The primary action is front-loaded, and every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value retrieval without an output schema, the description is mostly complete. It explains retrieval and listing, scoping, and pairing. Lacks explicit mention of return format or error cases, but is sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters. The description adds semantic value by explaining the behavior when 'key' is omitted (list all keys) and providing examples of stored values (ticker, address, research notes), beyond the schema's own description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a saved value or lists keys, using specific verbs and resources. It distinguishes itself from siblings 'remember' and 'forget' by explicitly pairing with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies when to use this tool: to look up context stored earlier without re-deriving it. It implies alternatives (use 'remember' to save, 'forget' to delete) and mentions scoping to identifier, providing clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (Grade: A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
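A sketch of a monitoring call (MCP JSON-RPC envelope assumed), using the relative "30d" window the parameter description recommends and the example ticker:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "recent_changes",
    "arguments": {
      "type": "company",
      "value": "AAPL",
      "since": "30d"
    }
  }
}
```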
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains the multi-source fan-out (SEC, GDELT, USPTO) and the return format, though it does not mention rate limits or authentication.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with three sentences front-loaded with the core purpose, no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description explains the return structure (structured changes, count, URIs) and is sufficient for this query tool with well-documented parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds valuable context beyond the schema, such as the allowed formats for 'since' and examples for 'value', and recommends '30d' or '1m' for typical monitoring.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as retrieving recent changes for a company, with a specific verb ('What's new') and resource ('a company'), and distinguishes it from siblings like search or entity_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases with example queries, but does not explicitly state when not to use it or name alternatives among siblings, though the positive guidance is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
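A sketch of a save call (MCP JSON-RPC envelope assumed); the key and value echo the examples in the parameter descriptions, and the stored value can later be fetched with recall or removed with forget:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_ticker",
      "value": "AAPL"
    }
  }
}
```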
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses persistence behavior (authenticated vs anonymous sessions, 24-hour retention), scoping by identifier, and that it's a save operation. Missing size limits or overwrite details, but sufficient for this tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff, front-loaded with main purpose. Every sentence adds essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value memory tool with no output schema, description covers purpose, usage, behavioral traits, and sibling references. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both params fully (100% coverage), but description adds value by giving example key names and stating value is 'any text'. Also explains scoping, slightly enhancing schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Save data the agent will need to reuse later' and mentions key-value pair storage, scoped by identifier. It distinguishes from sibling tools 'recall' and 'forget' by pairing with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly says 'Use when you discover something worth carrying forward' and pairs with recall/forget. It does not provide explicit when-not-to-use, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade: A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
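A sketch of a drug-name resolution call (MCP JSON-RPC envelope assumed); the drug name comes from the description's own example:

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "resolve_entity",
    "arguments": {
      "type": "drug",
      "value": "ozempic"
    }
  }
}
```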
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description details output: IDs plus pipeworx:// citation URIs, with examples. Does not mention error handling or behavior when entity not found, but overall fairly transparent given the tool's simplicity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: three sentences that front-load the purpose, include examples, and provide usage direction. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains return values (IDs and citation URIs) and provides examples. Covers usage context and integration with other tools, making it complete for a lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds meaningful context by providing examples of valid values for both 'type' and 'value' parameters, such as 'AAPL' or '0000320193' for company, and 'ozempic' for drug.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly specifies the tool resolves canonical identifiers for companies and drugs, listing specific ID systems (CIK, ticker, RxCUI, LEI) and giving examples. It distinguishes from sibling tools by stating it should be used before other tools needing official identifiers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: when a user mentions a name and you need official IDs for subsequent tools. Mentions it replaces multiple lookup calls, but does not explicitly exclude usage or mention alternative tools for other scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Grade: A)
Search the Spotify catalog. type = comma-separated list of album, artist, playlist, track, show, episode, audiobook. Returns matched items in named sections.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Comma-separated types (default "track") | |
| limit | No | 1-50 (default 20) | |
| query | Yes | Search query | |
| market | No | ISO 3166-1 alpha-2 market code (e.g., "US") | |
| offset | No | 0-1000 (default 0) |
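A sketch of a catalog search (MCP JSON-RPC envelope assumed); the query string is illustrative, and type, limit, and market show the optional parameters alongside the documented defaults:

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "query": "lo-fi jazz piano",
      "type": "artist,album",
      "limit": 5,
      "market": "US"
    }
  }
}
```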
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that results are returned in named sections, but does not describe pagination behavior, rate limits, or the read-only nature of the call. With no annotations, the description alone does not provide full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key action. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers search functionality and return format. With 100% schema coverage and no output schema, it is nearly complete, though 'named sections' could be clarified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds value by listing valid types for the 'type' parameter beyond schema's default-only description. Other parameters rely on schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it searches the Spotify catalog and lists the types (album, artist, etc.), distinguishing it from specific lookup tools like get_album.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this vs. alternatives (e.g., get_* tools). Implied usage for general search, but lacks exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (Grade: A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
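A sketch of a fact-check call (MCP JSON-RPC envelope assumed); the claim text is the example from the parameter description:

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "tools/call",
  "params": {
    "name": "validate_claim",
    "arguments": {
      "claim": "Apple's FY2024 revenue was $400 billion"
    }
  }
}
```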
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes return verdict types, citation, and delta; notes it replaces multiple sequential calls. No annotations provided, so the description carries this burden well, though it does not explicitly state it is read-only or list potential side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action, concise yet informative. No extraneous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage context, return values, and limitations thoroughly. No output schema but the description sufficiently explains what the tool returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes the 'claim' parameter with 100% coverage. Description adds domain-specific context (company-financial claims) but does not further elaborate on parameter format beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool validates factual claims against authoritative sources. Differentiates from siblings like 'search' by being specialized for fact-checking natural-language statements.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use when an agent needs to check truth of user statements, with examples. Mentions current limitation to company-financial claims, guiding appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.