
Your Spotify MCP Server

by pentafive

get_top_tracks

Retrieve your most played Spotify tracks by play count for any time period, from all-time favorites to specific date ranges, with full track details and listening statistics.

Instructions

Get your most played tracks for any time period.

Returns your top tracks ranked by play count, with full track details and play statistics. This queries your complete listening history (not limited to Spotify's 50 recent tracks).

Time period options:

  • Omit dates for all-time top tracks

  • Specify start_date only for "since X" queries

  • Specify both dates for a specific range

Example queries:

  • "What are my top 10 songs?"

  • "What were my most played tracks in summer 2024?"

  • "Show me my top 20 songs from last month"

  • "What are my all-time top tracks?"

Input Schema

| Name          | Required | Description                                                        | Default |
|---------------|----------|--------------------------------------------------------------------|---------|
| start_date    | No       | Start date in YYYY-MM-DD format. If omitted, includes all history. |         |
| end_date      | No       | End date in YYYY-MM-DD format. If omitted, includes up to today.   |         |
| limit         | No       | Number of tracks to return (1-30). Default is 10.                  |         |
| output_format | No       | Output format: "toon" (default, 40-60% token savings) or "json"    | toon    |
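
As a rough sketch of how a client might invoke this tool, here is a minimal example using the MCP TypeScript SDK. The server launch command is a placeholder and the stdio transport is an assumption; consult the server's install instructions for the actual setup.

```typescript
// Minimal sketch: calling get_top_tracks through the MCP TypeScript SDK.
// The server launch command below is a placeholder, not the server's
// documented install command.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "your-spotify-mcp", // placeholder launch command
  });
  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  const result = await client.callTool({
    name: "get_top_tracks",
    arguments: {
      start_date: "2024-06-01", // YYYY-MM-DD
      end_date: "2024-08-31",
      limit: 10,
      output_format: "json",
    },
  });
  console.log(result.content); // track details and play statistics

  await client.close();
}

main().catch(console.error);
```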
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it queries complete listening history (not limited to Spotify's 50 recent tracks), returns full track details with play statistics, and explains time period options. However, it doesn't mention authentication requirements, rate limits, or potential data freshness issues.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with the core purpose, adds important behavioral context, explains parameter semantics through time period options, and provides concrete example queries. Every sentence serves a clear purpose with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 100% schema coverage but no annotations or output schema, the description provides good context about what the tool does and how to use it. The example queries are particularly helpful. However, for a tool with no output schema, it could better describe the return format beyond 'full track details and play statistics' to help the agent understand the response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds value by explaining the semantics of the date parameters (omit both dates for all-time, start_date only for 'since X' queries, both dates for a specific range), but adds nothing beyond the schema for the 'limit' and 'output_format' parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get your most played tracks for any time period' with specific details about ranking by play count and querying complete listening history. It distinguishes from sibling tools like 'get_top_artists' (different resource) and 'search_listening_history' (different query approach).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (for top tracks queries) and includes example queries that illustrate practical applications. However, it doesn't explicitly contrast with alternatives like 'get_track_stats' or 'get_track_rank' from the sibling list, which might offer overlapping functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pentafive/your-spotify-mcp'
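
The same lookup can be done from code. Below is a minimal fetch-based sketch, assuming an ES module running on Node 18+ (which ships a built-in fetch); the response shape is not documented on this page, so the result is simply printed.

```typescript
// Fetch the same MCP directory record as the curl example above.
const url = "https://glama.ai/api/mcp/v1/servers/pentafive/your-spotify-mcp";

const res = await fetch(url);
if (!res.ok) {
  throw new Error(`Request failed with status ${res.status}`);
}
const server = await res.json(); // response shape not documented here
console.log(server);
```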

If you have feedback or need assistance with the MCP directory API, please join our Discord server.