Sports Trading Card Agent
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 9 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
- Behavior: 3/5
  Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
  With no annotations provided, the description carries the full burden. It discloses the important behavioral note that 'Partial names work' for fuzzy matching and lists the covered stat categories (passing, rushing, etc.). However, it lacks disclosure of error handling, rate limits, or authentication requirements.
  Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
  Is the description appropriately sized, front-loaded, and free of redundancy?
  Well-structured with clear Args and Returns sections. Front-loaded purpose statement. The Returns section is slightly redundant given that an output schema exists, but every sentence adds value. No excessive verbosity.
  Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
  Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
  For a 2-parameter lookup tool with an output schema present, the description adequately covers purpose, parameter semantics, and partial matching behavior. Missing error-state documentation (e.g., ambiguous partial matches), but reasonable completeness given the tool's simplicity.
  Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 5/5
  Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
  Schema coverage is 0%, requiring the description to fully compensate. The Args section provides excellent semantics: player_name includes multiple examples and notes partial matching behavior; season includes an example and a default-value note. Fully documents both parameters beyond what the schema provides.
  Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
  Does the description clearly state what the tool does and how it differs from similar tools?
  States the specific verb 'Look up' and resources 'NFL player stats' and 'card market insights'. Distinguishes from mlb_stats_lookup via the 'NFL' specificity, but does not explicitly differentiate from the generic player_stats_lookup sibling.
  Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
  Does the description explain when to use this tool, when not to, or what alternatives exist?
  No explicit guidance on when to use this tool versus alternatives like player_stats_lookup or card_market_analysis. While the NFL domain is clear, there are no 'when-not-to-use' notes or alternative recommendations.
  Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 3/5
  No annotations provided, so the description carries the full burden. It successfully discloses that 'partial names work' (a critical behavioral trait) and outlines the return contents ('Player bio, current season stats, and card market insight'). However, it lacks information on error handling, rate limits, or matching behavior when multiple players match a partial name.
- Conciseness: 4/5
  Well-structured with a front-loaded purpose statement followed by Args and Returns sections. Every sentence serves a documentation need. Slightly verbose due to the docstring-style formatting, but highly readable and appropriately sized for the parameter complexity.
- Completeness: 4/5
  Given the existence of an output schema (not shown but indicated), the description appropriately summarizes return values without overspecifying. Parameters are thoroughly documented. Minor gap: sibling tool relationships remain unaddressed, which would be helpful given the nine siblings spanning overlapping sports domains.
- Parameters: 5/5
  With 0% schema description coverage, the description fully compensates by documenting all three parameters with examples (e.g., 'LeBron James'), enumerated valid values for 'sport' ('nba', 'nfl', 'mlb'), notes on partial name matching, and default values. This maximizes clarity for a zero-coverage schema.
- Purpose: 4/5
  Clear verb ('Look up') and resources ('player stats', 'card market insights'), with explicit scope covering NBA, NFL, or MLB. However, it fails to distinguish itself from siblings like 'mlb_stats_lookup' or 'nfl_stats_lookup': it doesn't state when to use this multi-sport version versus the sport-specific alternatives.
- Usage Guidelines: 2/5
  No guidance provided on when to select this tool versus 'mlb_stats_lookup', 'nfl_stats_lookup', or the various card-specific siblings. No prerequisites or exclusion criteria mentioned despite the complex sibling landscape.
- Behavior: 3/5
  With no annotations provided, the description carries the full disclosure burden. It successfully explains the data inputs (market trends, player performance) and output format (recommendation with confidence level, supporting data). However, it omits operational details like data freshness, calculation methodology, or rate limits that would help agents understand behavioral constraints.
- Conciseness: 4/5
  The docstring-style structure (description + Args + Returns) is well-organized and front-loaded with the core purpose. Every section is necessary given the 0% schema coverage. The Returns section risks redundancy with the existing output schema, though without seeing that schema's content, its inclusion is justified.
- Completeness: 4/5
  Given the 3-parameter complexity with zero schema coverage, the Args section provides necessary completeness. The presence of an output schema means the Returns section is optional but not harmful. Missing only explicit sibling differentiation to be fully complete for an agent selecting between multiple card analysis tools.
- Parameters: 5/5
  With 0% schema description coverage, the description fully compensates through the Args section. It provides clear semantic meaning for each parameter: card_query includes specific format examples, player_name explains the cross-reference purpose, and sport enumerates valid values (nba/nfl/mlb) with defaults. This adds critical meaning absent from the bare schema.
- Purpose: 4/5
  The description clearly states the tool provides 'buy/sell/hold recommendations' for sports cards using market trends and player data. The specific advisory nature (BUY/SELL/HOLD/AVOID) distinguishes it from sibling lookup tools like card_price_lookup or player_stats_lookup, though it could explicitly clarify when to choose this over card_market_analysis.
- Usage Guidelines: 3/5
  The Args section provides excellent parameter-level guidance (e.g., player_name 'improves accuracy', sport defaults to 'nba'), specifying required vs. optional inputs. However, it lacks explicit guidance on when to use this investment advisor versus siblings like card_market_analysis or vintage_card_analysis.
- Behavior: 4/5
  With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains the return structure (average, median, low, high, individual listings) and clarifies the semantic difference between sold and active listings. However, it omits operational details like rate limits, data freshness, or error behavior when cards are not found.
- Conciseness: 4/5
  The Args/Returns docstring structure is appropriate for the complexity. The query examples are necessary domain guidance (trading card searches require specific formats) and the listing_type explanation prevents ambiguity. No sentences are gratuitous, though the Returns section slightly overlaps with the existing output schema.
- Completeness: 4/5
  Given the tool has 3 simple parameters and an output schema (per context signals), the description is adequately complete. It documents all parameters and summarizes return values without needing to replicate the full output schema. No critical gaps exist for a lookup operation of this scope.
- Parameters: 4/5
  With 0% schema description coverage (only titles provided), the description provides essential compensation by detailing query syntax with concrete examples, explaining the enum-like behavior of listing_type, and documenting the limit range (1-50). This rich semantic context prevents parameter misuse despite the bare schema.
- Purpose: 4/5
  The description 'Look up current market prices for a sports trading card' provides a specific verb and resource. While it doesn't explicitly name siblings, the functional focus on price retrieval clearly distinguishes it from siblings like card_investment_advisor (advice/recommendations) and card_market_analysis (analytical insights) by focusing on raw data lookup.
- Usage Guidelines: 3/5
  The description provides excellent implicit usage guidance through the listing_type parameter explanation ('sold' for true market value vs. 'active' for asking prices), helping users select the right price dataset. However, it lacks explicit guidance on when to use this tool versus alternatives like card_market_analysis or card_investment_advisor.
- Behavior: 3/5
  With no annotations provided, the description carries the full burden and successfully discloses behavioral traits like partial name matching ('Partial names work') and the specific stat categories covered (AVG, HR, RBI, etc.). However, it lacks critical operational details such as error handling (what happens if the player isn't found?) or rate limits.
- Conciseness: 4/5
  The description uses a clear Args/Returns structure that front-loads the core purpose before diving into parameter specifics. While slightly formal and Python-docstring in style, every sentence adds value (stat categories listed, examples provided, return value summarized) with minimal waste.
- Completeness: 4/5
  For a 2-parameter lookup tool that has an output schema, the description provides appropriate coverage: it documents both parameters with examples despite empty schema descriptions, and summarizes the return structure without needing to duplicate the full output schema. It appropriately bridges the stats and card domains given the sibling tools.
- Parameters: 4/5
  Given the schema has 0% description coverage, the description effectively compensates by providing rich examples for 'player_name' ('Shohei Ohtani', 'Aaron Judge') and noting that partial names work, which is crucial behavioral context. It also clarifies that 'season' takes year values like 2025 and references the default.
- Purpose: 5/5
  The description opens with a specific verb ('Look up') and resource ('MLB player stats'), and explicitly distinguishes itself from siblings like 'nfl_stats_lookup' and the generic 'player_stats_lookup' by specifying 'MLB'. It also clarifies the card-market connection that links it to card-related siblings without overlapping entirely.
- Usage Guidelines: 3/5
  While the MLB specificity implies baseball contexts, there are no explicit guidelines on when to use this versus 'card_market_analysis' or 'player_stats_lookup'. The description notes the tool provides card insights but doesn't clarify whether this is the right tool for pure card pricing versus 'card_price_lookup'.
- Behavior: 3/5
  With no annotations provided, the description carries the full burden. It excellently documents the output behavior (arbitrage detection, spread calculation, actionable insights) but omits operational details such as data freshness, rate limits, error behaviors, or whether this performs real-time scraping versus serving cached data.
- Conciseness: 4/5
  The Args/Returns structure is clear and front-loaded. While verbose for a single-parameter tool, the detail is justified by the complex output. The examples and return value list earn their place by providing necessary specificity.
- Completeness: 4/5
  Given the single parameter and complex output, the description is complete. It compensates for the minimal schema, explains the parameter thoroughly, and details the comprehensive output. Minor gap: it lacks explicit sibling differentiation guidelines.
- Parameters: 5/5
  The schema has 0% description coverage (the 'query' property lacks a description field), but the tool description fully compensates by providing detailed semantics: explicit examples ('2024 Panini Prizm...'), format guidance (include year, brand, player name), and success criteria ('for best results').
- Purpose: 5/5
  The description states a specific action ('Full market analysis') on a specific resource ('sports card'), and details specific aspects (price trends, buy/sell spread, arbitrage). It distinguishes itself from sibling tools like card_price_lookup (simple lookup vs. comprehensive analysis) and card_investment_advisor (data analysis vs. investment advice) by scope and output depth.
- Usage Guidelines: 3/5
  While the detailed return values imply this is for deep market analysis rather than simple price checks, there is no explicit guidance on when to choose this over siblings like card_price_lookup or vintage_card_analysis. The distinction is left to inference.
- Behavior: 3/5
  With no annotations provided, the description carries the full burden. It adequately describes return values (era classification, set significance, etc.) but lacks information about side effects, caching behavior, computational cost, or read-only nature that would be essential for a tool with zero annotation coverage.
- Conciseness: 4/5
  The docstring format with clear Args and Returns sections is well-structured and appropriately sized. The examples are necessary given the lack of schema documentation, though they consume space. Every sentence earns its place by clarifying scope or parameter usage.
- Completeness: 4/5
  Given the single parameter and the existence of an output schema, the description is sufficiently complete. It documents the lone parameter thoroughly despite empty schema coverage and summarizes outputs appropriately without needing to duplicate full output schema details.
- Parameters: 5/5
  With 0% schema description coverage, the description fully compensates via the Args section with explicit formatting instructions ('Include year, brand, and player name') and four concrete examples showing the expected query syntax ('1952 Topps Mickey Mantle', etc.), which the empty schema cannot convey.
- Purpose: 5/5
  The description clearly states the tool performs 'Specialized analysis for vintage and classic sports cards (pre-2000)' with specific outputs including 'era-specific insights, condition sensitivity analysis, grade-based pricing, and investment outlook.' The pre-2000 constraint effectively distinguishes this from siblings like card_market_analysis and card_price_lookup.
- Usage Guidelines: 3/5
  The description implies usage boundaries through the 'pre-2000' specialization and vintage focus, but provides no explicit guidance on when to choose this over card_investment_advisor or card_market_analysis. The agent must infer this is for vintage cards only.
- Behavior: 4/5
  With no annotations provided, the description carries the full burden. It successfully discloses behavioral scope ('Scans a curated watchlist' clarifies this is not exhaustive) and output structure (the Returns section details trend scores and buying tips). It misses operational details like data freshness or caching behavior, but covers the essential behavioral context.
- Conciseness: 5/5
  The description is optimally structured with a clear purpose statement followed by Args and Returns sections. Every sentence earns its place: the first establishes function, the second explains methodology (curated scan), and the structured sections document the single parameter and expected output without redundancy.
- Completeness: 5/5
  Given the low complexity (1 optional parameter) and the presence of an output schema, the description provides excellent completeness. It compensates for poor schema documentation via the Args section and voluntarily documents the return structure, leaving no critical gaps for an AI agent to invoke this tool correctly.
- Parameters: 5/5
  Despite 0% schema description coverage (the limit property lacks a description field), the Args section fully compensates by documenting the parameter's purpose ('Number of trending players to return'), constraints ('1-20'), and default value ('10'), providing complete semantic meaning beyond the raw schema.
- Purpose: 5/5
  The description uses a specific verb ('Get a list') and resource ('NBA players with breakout performances whose trading cards are likely rising in value'), and explicitly differentiates itself from generic player lookups via the 'curated watchlist of young stars' and 'breakout performances' scope, distinguishing it from sibling tools like player_stats_lookup or card_price_lookup.
- Usage Guidelines: 3/5
  The description implies usage context through 'breakout performances' and 'rising in value,' suggesting when to use this tool (identifying trending investments), but lacks explicit guidance on when to prefer alternatives like player_stats_lookup for general statistics or card_market_analysis for broader market trends.
- Behavior: 4/5
  With no annotations provided, the description carries the full burden and successfully discloses the behavioral logic: it performs a comparative analysis (raw vs. graded values), factors in marketplace fees and turnaround time, and returns a clear recommendation. Minor gap: it doesn't mention potential limitations (e.g., data availability) or whether it performs real-time market queries.
- Conciseness: 5/5
  Uses an efficient docstring format with Args and Returns sections. Front-loaded purpose statement followed by a concise logic explanation. Every sentence provides value, with no repetition of the tool name and no tautology.
- Completeness: 5/5
  Complete despite 0% schema coverage. The description covers input requirements and processing logic (which factors are compared), and summarizes return values (detailed ROI analysis with recommendation). Since an output schema exists (per context signals), the Returns summary is appropriate without an exhaustive field listing.
- Parameters: 5/5
  Schema description coverage is 0%, requiring full compensation. The description excellently documents all three parameters: card_query includes examples and a critical constraint ('Do NOT include grading info'), grading_company lists valid options and a default, and expected_grade provides format examples and a default.
- Purpose: 5/5
  A clear, specific verb ('Calculate'), resource ('sports card'), and scope (grading ROI comparison). Distinguishes itself from siblings like card_price_lookup (simple pricing) and card_market_analysis (trends) by specifying that this evaluates the financial viability of the grading process itself.
- Usage Guidelines: 4/5
  Clearly establishes when to use it (when deciding whether to pay for professional grading) and what factors it considers (costs, turnaround time, fees). Lacks explicit 'when not to use' guidance or named sibling alternatives, though the specific scope makes misuse unlikely.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, a weighted average across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
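The weighting scheme above can be sketched in Python. This is a hypothetical reconstruction of the published formula; Glama's exact rounding and normalization are not documented here.

```python
# Dimension weights for a single tool's TDQS, as stated in the scoring rubric.
DIMENSION_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(scores: dict) -> float:
    """Weighted 1-5 Tool Definition Quality Score for one tool."""
    return sum(DIMENSION_WEIGHTS[d] * s for d, s in scores.items())

def definition_quality(tool_scores: list) -> float:
    """Server-level TDQ: 60% mean + 40% minimum, so one weak tool drags the score down."""
    per_tool = [tdqs(s) for s in tool_scores]
    return 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)

def overall(definition_quality_score: float, coherence: float) -> float:
    """Overall score: 70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality_score + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier; B and above is passing."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a server whose tools all score 4.0 and whose coherence is 3.0 would land at 0.7 × 4.0 + 0.3 × 3.0 = 3.7, tier A.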
MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/rjexile/sports-card-agent'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.