
Server Details

NBA MCP — player, team, and game data via the BallDontLie API

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-nba
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.1/5 across 9 of 9 tools scored.

Server Coherence (Grade: A)
Disambiguation: 3/5

Most tools are clearly distinct (NBA vs memory/pipeworx), but ask_pipeworx overlaps with the NBA tools since it can also answer NBA questions, potentially causing confusion. The NBA tools are well separated from each other.

Naming Consistency: 4/5

All tools use snake_case with a verb_noun pattern (e.g., get_games, search_players). The only minor inconsistency is ask_pipeworx and discover_tools, which follow a different verb style (ask, discover) but remain readable.

Tool Count: 4/5

9 tools is a reasonable count for an NBA server with supplementary memory and pipeworx tools. It's slightly above the ideal scope but not excessive.

Completeness: 3/5

The NBA tools cover teams, players, and games but are missing common operations like getting standings, player stats, or team stats. The memory and pipeworx tools add extra utility but are not part of the NBA domain.

Available Tools

9 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)

question (required): Your question or request in natural language
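As a hedged illustration of the single-parameter call shape, an MCP tools/call request for this tool might look like the sketch below (the envelope fields follow the MCP JSON-RPC convention; the argument name comes from the parameter table above, and the example question is taken from the tool's own description):

```python
import json

# Sketch of an MCP tools/call request invoking ask_pipeworx. Illustrative
# only; the server's actual transport details are not shown on this page.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            # The single required parameter: a plain-English question.
            "question": "What is the US trade deficit with China?",
        },
    },
}
print(json.dumps(request))
```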
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions that the tool picks the right tool and fills arguments, indicating some autonomy, but does not disclose any limitations, error conditions, or what 'best available data source' means. This is adequate but leaves some ambiguity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (three sentences) and front-loaded with the key action. Every sentence adds value: first states purpose, second explains mechanics, third gives examples. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has a single parameter, no output schema, and no annotations, the description is complete enough for an agent to understand its use. It could be improved by mentioning that it returns a string answer, but the examples imply that.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'question' that has a description. The description adds value by explaining that the question should be natural language and providing examples, which goes beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers questions in plain English, using the best data source, and distinguishes it from sibling tools by emphasizing that it abstracts away tool selection and schema details. The examples further clarify its purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to describe needs without browsing tools or learning schemas, which implies when to use this tool (for natural language queries). However, it does not explicitly say when not to use it or name alternatives; given that the sibling tools are special-purpose (e.g., get_teams, remember), the context is nonetheless clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)

limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
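The documented limit constraint (default 20, max 50) can be sketched as a small argument-building helper; the helper name `discover_tools_args` is hypothetical, invented here for illustration:

```python
# Sketch: build a discover_tools arguments object, honoring the documented
# limit constraint (default 20, max 50). Helper name is hypothetical.
def discover_tools_args(query, limit=None):
    if limit is None:
        limit = 20                   # documented default
    limit = max(1, min(limit, 50))   # documented max is 50
    return {"query": query, "limit": limit}

args = discover_tools_args("find trade data between countries", limit=200)
print(args)  # limit clamped to 50
```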
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must convey behavioral traits. It states the tool returns the most relevant tools with names and descriptions, but does not disclose whether it modifies state, requires authentication, or has rate limits. The description is adequate but could mention if it's read-only or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each adding value: purpose, return value, and usage guidance. No filler words. Front-loaded with key action. Highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 simple parameters, 100% schema coverage, no output schema, and a straightforward search task, the description is nearly complete. It could mention that results are ranked by relevance, but not essential. The description provides enough for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes both parameters. The description adds minimal extra meaning beyond the schema, merely hinting at the query format with examples. The limit parameter is fully covered by the schema. A baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches a tool catalog using natural language queries, returns relevant tools with names and descriptions, and explicitly distinguishes it as the first tool to call when many tools are available. The verb 'search' and resource 'tool catalog' are specific and actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task,' providing clear guidance on when to use it. It also implies that after using this tool, you would invoke specific tools returned, distinguishing it from sibling tools like search_players or get_teams.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: A)

Delete a stored memory by key.

Parameters (JSON Schema)

key (required): Memory key to delete
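A toy in-memory analogue of the delete semantics can clarify what a caller should expect; this is a sketch only, since the real server's behavior on a missing key (error versus no-op) is not documented on this page:

```python
# Toy in-memory analogue of forget's delete-by-key semantics (sketch only;
# the real server's handling of missing keys is unknown).
memory = {"target_ticker": "AAPL"}

def forget(key):
    # Returns True if a stored memory was actually removed.
    return memory.pop(key, None) is not None

deleted = forget("target_ticker")
print(deleted, memory)
```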
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It clearly states the action (delete) and the required key, but does not disclose whether the operation is irreversible, requires permissions, or affects other memories.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no filler words, front-loading the action and resource efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter delete operation with no output schema, the description is nearly complete. It lacks only behavioral details like irreversibility, but these are partly implied by 'Delete'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the key parameter. The description adds minimal value by repeating that the key identifies the memory to delete, but does not elaborate on format constraints or examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Delete') and resource ('stored memory by key'), clearly distinguishing from sibling tools like 'remember' (store) and 'recall' (retrieve).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies deletion via key, but does not explicitly state when to use this tool versus alternatives like 'remember' or 'recall', nor does it mention any prerequisites or side effects.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_games (Grade: A)

Get NBA games for a season (e.g., 2023, 2024). Returns game date, status, matchup teams, and final or live scores.

Parameters (JSON Schema)

limit (optional): Number of results to return (default: 25, max: 100)
season (required): Season start year (e.g., 2024 for the 2024-25 season)
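The season parameter is the start year, not a season label, which is an easy mistake to make; a small sketch of the conversion (helper name `get_games_args` is invented here for illustration):

```python
# Sketch: build get_games arguments from an NBA season label like "2024-25".
# The schema wants the season *start year* (2024 for the 2024-25 season),
# and limit is documented as default 25, max 100.
def get_games_args(season_label, limit=25):
    start_year = int(season_label.split("-")[0])
    return {"season": start_year, "limit": min(limit, 100)}

args = get_games_args("2024-25")
print(args)
```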
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It discloses that the tool returns game data but does not mention idempotency, potential errors, rate limits, or pagination beyond the limit parameter. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and result details. Every sentence is useful with no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, no output schema, no nested objects), the description adequately covers purpose and return fields. It could mention that results are paginated, but the limit parameter implies it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description adds no extra meaning beyond the schema, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies a clear verb ('Get') and resource ('NBA games') with scope ('for a given season') and lists return fields ('game date, status, teams, and scores'). It effectively distinguishes from siblings like get_player and get_teams.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states when to use (for a given season) but does not explicitly mention when not to use or provide alternatives among siblings. The context is clear but lacks exclusions or comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_player (Grade: A)

Get detailed NBA player profile including career stats, season-by-season performance, biographical info, and team history. Requires player ID from search_players.

Parameters (JSON Schema)

id (required): BallDontLie player ID
_apiKey (required): BallDontLie API key
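A sketch of the call body, assuming the id comes from a prior search_players result; both the id value and the API-key string below are placeholders, not real values:

```python
# Sketch of a get_player tools/call body. The id would come from a prior
# search_players result; the _apiKey value is a placeholder, not a real key.
params = {
    "name": "get_player",
    "arguments": {
        "id": 237,                          # hypothetical BallDontLie player ID
        "_apiKey": "YOUR_BALLDONTLIE_KEY",  # placeholder credential
    },
}
print(params)
```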
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must disclose behavioral traits. It only states the tool retrieves a profile, but does not mention idempotency, potential errors, rate limits, or response structure. The tool name implies a read operation, but the description adds no behavioral depth beyond that.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Essential information is front-loaded: verb, resource, and key identifier.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is simple (1 param, no output schema, no nested objects) and has clear sibling differentiation, the description is adequate. However, without annotations, it could benefit from mentioning that the tool is safe to call repeatedly (read-only) or that the ID is numeric.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (only one parameter with a description), so baseline is 3. The description adds context by clarifying that the ID is from BallDontLie, which is a meaningful addition beyond the schema's generic 'BallDontLie player ID'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies a concrete verb ('Get'), a clear resource ('detailed profile for a single NBA player'), and the unique identifier ('BallDontLie player ID'). It clearly distinguishes from siblings like 'search_players' (which searches) and 'get_teams' (different resource).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when you have a specific player ID and want a detailed profile, but does not explicitly state when to use it versus alternatives like 'search_players'. It offers no exclusions and no guidance on required authorization or data freshness.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_teams (Grade: A)

Get all 30 NBA teams with full names, abbreviations, conference, and division. Use to find team info or prepare for get_games queries.

Parameters (JSON Schema)

No parameters
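With no parameters, the call body reduces to the tool name and an empty arguments object; a minimal sketch:

```python
# Sketch: a parameterless tools/call body for get_teams.
# With no schema parameters, arguments is simply an empty object.
params = {"name": "get_teams", "arguments": {}}
print(params)
```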

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description clearly indicates the tool is read-only (listing) and returns a fixed set of data (30 NBA teams with specific fields). With no annotations provided, the description effectively communicates the safe, non-destructive behavior. Could be improved by noting whether results are sorted or if any caching applies, but sufficient for the simple use case.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the action ('List all 30 NBA teams') and includes relevant detail. Every word adds value, no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and no output schema, the description covers the purpose and return content well. It could mention that the result is a list of team objects, but that is implicit. Overall, complete for a straightforward listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so the schema coverage is 100%. The description adds meaning by specifying the return attributes (full names, abbreviations, conference, division), which is helpful. Since there are zero parameters, the baseline is 4 and the description does not need to add param info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all 30 NBA teams and specifies the included attributes (full names, abbreviations, conference, division). It distinguishes itself from siblings by being a simple retrieval of all teams, unlike get_player or search_players which target individual or filtered results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for listing all teams, but does not explicitly state when to use this tool versus alternatives like search_players or get_player. No exclusions or prerequisites are mentioned, which is acceptable for a simple list-all tool with no parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)

key (optional): Memory key to retrieve (omit to list all keys)
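The two documented modes differ only in whether the key is present; a sketch of both argument shapes (the key value is a hypothetical example):

```python
# Sketch of recall's two documented modes: fetch one stored memory by key,
# or omit the key entirely to list all stored memories.
fetch_one = {"key": "user_preference"}  # hypothetical example key
list_all = {}                           # no key: list all stored memories
print(fetch_one, list_all)
```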
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must carry the full burden. It discloses that omitting the key lists all memories, and that retrieval works across sessions. No contradictions with annotations (none exist).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, using two sentences to convey purpose, usage, and behavior. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one optional parameter, no output schema), the description is complete enough. It explains both retrieval modes (by key or list all) and cross-session persistence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a clear description for the 'key' parameter. The description adds context by explaining the effect of omitting the key, but the schema already covers the parameter semantics sufficiently.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieve a stored memory by key or list all memories. It distinguishes itself from sibling tools like 'remember' and 'forget' by focusing on retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on when to use the tool ('to retrieve context you saved earlier'), implying it should be used after 'remember'. However, it doesn't explicitly exclude other scenarios or mention when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)

key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text: findings, addresses, preferences, notes)
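A toy in-memory analogue of the store semantics, using the example keys from the schema; this is a sketch under an assumption the page does not confirm, namely that writing an existing key overwrites it:

```python
# Toy in-memory analogue of remember's key-value store (sketch; whether the
# real server overwrites existing keys is not documented on this page).
memory = {}

def remember(key, value):
    memory[key] = value  # last write wins in this sketch

remember("target_ticker", "AAPL")
remember("subject_property", "123 Main St")  # hypothetical example values
print(memory)
```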
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses persistence behavior (authenticated vs anonymous sessions), which is critical for the agent to understand data retention. However, it does not mention whether overwriting an existing key is allowed or if there are limits on storage size or number of keys.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no fluff. The first sentence defines the action and resource, the second adds usage guidelines and persistence details. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple key-value store with only two string parameters, the description covers purpose, usage, and persistence. It lacks mention of whether duplicate keys are allowed or limits, but given the low complexity, the description is nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear examples for the key parameter and value description. The description adds context about how parameters relate to the tool's purpose (saving findings, preferences, etc.), which supplements the schema's technical definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('store') and resource ('key-value pair in session memory'), clearly distinguishing it from siblings like 'forget' (remove) and 'recall' (retrieve). It also lists example use cases, making the purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly tells when to use this tool (saving intermediate findings, user preferences, context across calls) and distinguishes it from 'forget' and 'recall' by the nature of the action. However, it does not explicitly state when NOT to use it or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_players (Grade: A)

Search for NBA players by name. Returns position, height, weight, college, and current team. Use get_player with the player ID for detailed stats and career history.

Parameters (JSON Schema)

limit (optional): Number of results to return (default: 10, max: 100)
query (required): Player name or partial name to search for
_apiKey (required): BallDontLie API key
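The description documents a two-step flow: search by name, then pass the returned ID to get_player. A sketch of both request bodies (no network calls; the player ID and API-key string are placeholders):

```python
# Sketch of the documented search-then-fetch flow: search_players by name,
# then pass the returned ID to get_player. Request bodies only, no I/O.
search_call = {
    "name": "search_players",
    "arguments": {
        "query": "curry",                   # name or partial name
        "limit": 10,                        # documented default
        "_apiKey": "YOUR_BALLDONTLIE_KEY",  # placeholder credential
    },
}
player_id = 115  # hypothetical ID taken from a search_players result
detail_call = {
    "name": "get_player",
    "arguments": {"id": player_id, "_apiKey": "YOUR_BALLDONTLIE_KEY"},
}
print(search_call["name"], detail_call["arguments"]["id"])
```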
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It states that the tool returns matching players with specific fields, but does not disclose any behavioral traits such as rate limits, authorization needs, or whether the search is case-sensitive. The behavior is predictable but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the primary action, and every sentence adds value. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 2 parameters, no output schema, and no annotations, the description provides enough to understand basic functionality but lacks completeness on edge cases, error handling, or response format. It is adequate for a simple search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds value by summarizing the result fields, but does not elaborate on parameter nuances beyond what the schema provides. However, since coverage is high, a baseline of 3 is appropriate, and the description's clarity earns a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Search') and resource ('NBA players by name'), and clearly states what is returned (player profile with position, height, weight, college, current team). This effectively distinguishes it from siblings like get_player which likely returns a single player, and get_teams which focuses on teams.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching by name, but does not provide explicit guidance on when to use this tool versus alternatives like get_player or get_teams. There are no exclusion criteria or hints about when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
