Server Quality Checklist
- Disambiguation 5/5
Each tool has a distinct, non-overlapping purpose. The three score variants (basic, full, refresh) are clearly differentiated by granularity and intent, while check_blacklist, report_fraud, get_badge, and register_agent target completely different resources with no ambiguity.
- Naming Consistency 4/5
Most tools follow a clear verb_noun pattern (check_blacklist, get_badge, register_agent) or a consistent domain-prefix pattern (score_basic, score_full, score_refresh). The only deviation is health_check, which reverses the pattern used by check_blacklist and should ideally be check_health.
- Tool Count 5/5
Nine tools is an ideal scope for an agent reputation service, covering the full lifecycle: registration, scoring (basic/full/refresh), fraud reporting/checking, discovery (leaderboard), visualization (badge), and system status. No tool feels redundant or superfluous.
- Completeness 4/5
The surface covers core workflows well: CRUD-like operations for scores and fraud reports, registration, and discovery. Minor gaps exist: no update_agent to modify registered metadata after creation, and no delete/deregister capability, though agents can work around these limitations.
Average 4.2/5 across 9 of 9 tools scored.
See the Tool Scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
This server provides 9 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
get_leaderboard
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint and destructiveHint, the description adds valuable behavioral context: the return structure ('Array of { wallet, score, tier }'), the ranking nature ('ranked list'), and the cost model ('FREE'). It does not contradict any annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, returns, cost, args) and front-loaded with the core action. It is slightly redundant with the input schema regarding the limit parameter, but the structure facilitates LLM parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single optional parameter) and absence of an output schema, the description adequately compensates by documenting the return structure textually. Combined with rich annotations covering safety and idempotency, this provides sufficient context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the limit parameter, the baseline is 3. The description repeats the schema's description ('Maximum number of entries to return (1-100)') in an Args section without adding additional semantic context like default behavior or pagination strategy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Get the leaderboard of top-scored AI agent wallets,' providing a specific verb and resource. It clearly distinguishes from siblings like score_basic, score_full (which calculate scores) and register_agent (which creates records) by focusing on retrieval of existing rankings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes 'This is a FREE endpoint,' providing operational context about cost constraints. However, it lacks explicit guidance on when to use this versus alternatives like get_badge or score_full, or when pagination might be needed versus the limit parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
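As a reference for the return structure quoted above, a minimal TypeScript sketch of the leaderboard shape follows; the field types and the optional-limit signature are assumptions, since the tool publishes no output schema:

```typescript
// Sketch of get_leaderboard I/O as described above; field types are assumed.
interface LeaderboardEntry {
  wallet: string; // Ethereum address (0x + 40 hex chars)
  score: number;
  tier: string;
}

// limit is optional and constrained to 1-100 per the input schema;
// the default when omitted is not documented.
type GetLeaderboardInput = { limit?: number };
type GetLeaderboardOutput = LeaderboardEntry[];
```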
register_agent
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical cost information ('FREE') not present in annotations. Aligns well with annotations (idempotentHint=true supports safe re-registration). Could explicitly mention behavior on duplicate wallet registration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose. Includes Args section (redundant with schema) but necessary to document Returns shape since no output schema exists. 'FREE' is high-value early information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Good coverage given idempotent/destructive annotations and described return shape {success, message}. Addresses cost and safety profile. Minor gap: doesn't clarify if registration is required before using scoring siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline applies. Description frames parameters as 'metadata' which adds conceptual grouping, but the Args section largely repeats schema descriptions without adding syntax details or examples beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Register'), resource ('AI agent wallet'), and scope ('with metadata'). Distinct from scoring/checking siblings (score_basic, check_blacklist, etc.) as a setup/onboarding operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides cost context ('FREE endpoint') but lacks explicit workflow guidance (e.g., 'use this before score_basic') or prerequisites. No mention of when NOT to use or alternatives to registration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
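A minimal sketch of the registration call as characterized above; the metadata fields are not enumerated in this report, so the input type below is an assumption:

```typescript
// Sketch of register_agent I/O; FREE endpoint, idempotent per its annotations.
interface RegisterAgentInput {
  wallet: string;                     // AI agent wallet address
  metadata?: Record<string, unknown>; // registration metadata (fields assumed)
}
interface RegisterAgentOutput {
  success: boolean;
  message: string;
}
```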
report_fraud
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate this is a non-read-only, non-destructive, non-idempotent operation, the description adds crucial behavioral context: the specific cost ($0.02 USD) and the return structure ({ success, reportId, message }). It does not disclose what happens after submission (e.g., review process, immediate blacklist status).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with clear sections: purpose statement, payment warning, Args list, and Returns documentation. Every sentence serves a distinct purpose; there is no redundant or filler text despite covering multiple aspects (function, cost, inputs, outputs).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a paid mutation endpoint with 3 required parameters and no output schema, the description adequately covers the essential ground: input requirements, payment obligation, and return value structure. It could be improved by explaining report lifecycle (e.g., 'creates a pending report for review') but is sufficient for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured schema already documents all parameters thoroughly (patterns, examples, constraints). The description's Args section repeats this information with nearly identical semantics ('suspected fraudster' adds slight intent context), meeting the baseline expectation when the schema carries the descriptive burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Submit), target resource (fraud report), and required inputs (wallet, transaction hashes, evidence). It clearly distinguishes from sibling tools like check_blacklist (which queries existing data) by being the only submission/mutation tool for fraud.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical usage context by explicitly stating this is a 'PAID endpoint — requires x402 payment ($0.02 USD)', which is essential for an agent to prepare payment headers. However, it does not specify when to use this versus check_blacklist or whether to verify the wallet status before reporting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
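For orientation, here is a sketch of the paid fraud-report call described above; the description names the inputs (wallet, transaction hashes, evidence) but not their exact parameter names, so those below are assumptions:

```typescript
// Sketch of report_fraud I/O; PAID endpoint ($0.02 USD via x402) per the description.
interface ReportFraudInput {
  wallet: string;              // suspected fraudster's wallet address
  transactionHashes: string[]; // evidence transactions (parameter name assumed)
  evidence: string;            // supporting evidence (parameter name assumed)
}
interface ReportFraudOutput {
  success: boolean;
  reportId: string;
  message: string;
}
```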
score_basic
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnly, non-destructive, idempotent). Description adds critical billing context ('FREE endpoint') and detailed return payload structure (score ranges, tiers, freshness) that annotations don't provide. Does not mention rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (purpose, billing, args, returns, examples). Front-loaded with primary action. Minor redundancy between opening paragraph and Returns section listing fields, but acceptable for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool, description comprehensively covers input validation, output structure (compensating for missing output schema), billing implications, and usage examples. No gaps given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with regex pattern and description. The Args section repeats schema information ('Ethereum wallet address (0x + 40 hex chars)') without adding semantic nuances like 'must be a deployed agent contract' or validation beyond the pattern.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Get' + resource 'basic reputation score' + target 'AI agent wallet on Base'. The modifier 'basic' effectively distinguishes it from the siblings 'score_full' and 'score_refresh'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete natural language examples mapping queries to tool use. Explicitly notes 'FREE endpoint — no x402 payment required,' implying cost-based selection criteria. Lacks explicit comparison to 'score_full' for when to prefer one over the other.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
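Based on the fields the description is said to cover (score ranges, tiers, freshness), a plausible score_basic response is sketched below; every field name and type here is an assumption, as the tool has no output schema:

```typescript
// Assumed shape of a score_basic response; FREE endpoint, no x402 payment.
interface ScoreBasicOutput {
  wallet: string;     // the queried AI agent wallet on Base
  score: number;      // falls within the documented score ranges
  tier: string;       // named reputation tier
  freshness?: string; // how recently the score was computed (field assumed)
}
```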
check_blacklist
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnlyHint, destructiveHint, idempotentHint), but the description adds critical behavioral context: the payment requirement ($0.05 USD) and the return structure ({ wallet, reported, reportCount, reports[] }). Since no output schema exists, documenting the return format is valuable added transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, cost warning, args, returns) and front-loaded with the core function. Every sentence serves a distinct purpose—cost warning prevents accidental invocation, and returns documentation compensates for missing output schema. No redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a single-parameter read operation with rich annotations covering safety properties, the description is complete. It appropriately compensates for the missing output schema by detailing the return object structure, and includes the critical payment context necessary for successful invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description lists 'wallet (string): Ethereum wallet address to check' which largely repeats the schema's description ('Ethereum wallet address (e.g. 0xAbC...123)'). It does not add semantic details beyond what the schema's pattern and description already provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The first sentence 'Check if a wallet has any fraud reports filed against it' provides a specific verb (Check), resource (fraud reports), and target (wallet). It clearly distinguishes from sibling 'report_fraud' by specifying this is a read/check operation rather than a write/submit operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides crucial usage constraints by noting it is a 'PAID endpoint — requires x402 payment ($0.05 USD)', which informs the agent about cost prerequisites. However, it does not explicitly contrast with sibling 'report_fraud' (e.g., 'use this to verify before reporting') or specify when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
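The return object quoted above can be typed roughly as follows; the structure of the individual report records is not documented, so it is left opaque:

```typescript
// Sketch of check_blacklist output; PAID endpoint ($0.05 USD via x402) per the description.
interface CheckBlacklistOutput {
  wallet: string;
  reported: boolean;
  reportCount: number;
  reports: unknown[]; // individual report records; structure not documented here
}
```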
get_badge
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, idempotent behavior. The description adds valuable cost information ('FREE endpoint') and clarifies the return structure includes both the URL and 'raw SVG content', which is not inferable from annotations alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear sections: purpose, usage context, cost, args, and returns. Front-loaded with the core action. No wasted words; every sentence delivers specific value regarding functionality, cost, or integration method.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of a formal output schema in the structured fields, the description adequately documents the return object structure '{ badgeUrl, svg }'. Combined with good annotations and single-parameter simplicity, the description provides sufficient context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description repeats the wallet parameter ('Ethereum wallet address') but adds no semantic depth beyond what the schema already provides (which includes the pattern and example format).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The opening sentence 'Get the embeddable SVG badge URL for a wallet's reputation score' provides a specific verb (Get), resource (SVG badge URL), and scope (wallet reputation). It clearly distinguishes from siblings like score_basic and score_full by emphasizing the embeddable badge aspect rather than raw score data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit context that the URL 'can be embedded in markdown, HTML, or any context that supports images' and notes 'This is a FREE endpoint' (cost guidance). However, it lacks explicit guidance on when to use this versus score_full/score_basic for fetching actual score data rather than visual badges.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
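A sketch of the { badgeUrl, svg } return described above, with assumed types:

```typescript
// Assumed shape of get_badge output: an embeddable URL plus the raw SVG markup.
interface GetBadgeOutput {
  badgeUrl: string; // embeddable in markdown, HTML, or any image-capable context
  svg: string;      // raw SVG content
}
```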
health_check
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, idempotent behavior. The description adds valuable cost context ('FREE endpoint') and documents the return structure ({status, version, uptime}), compensating for the missing output schema without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly structured sentences: action definition, cost characteristic, and return value documentation. No redundancy or wasted words; information is front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter health check with comprehensive annotations, the description is complete. It documents the return payload (compensating for lack of output schema) and operational characteristics (free) necessary for an agent to invoke the tool confidently.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and 100% schema coverage, warranting the baseline score of 4. No additional parameter semantics are needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Check') and clearly identifies the resource ('DJD Agent Score API system status'), distinguishing it from siblings like score_basic or get_leaderboard which perform business logic rather than system monitoring.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'FREE endpoint' note provides implicit guidance about cost-free usage, but the description lacks explicit guidance on when to use this versus other tools (e.g., 'use before calling score endpoints to verify connectivity') or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
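The { status, version, uptime } payload noted above, sketched with assumed types (the units of uptime are not documented):

```typescript
// Assumed shape of health_check output; the tool takes no parameters.
interface HealthCheckOutput {
  status: string;  // e.g. "ok" (value format assumed)
  version: string;
  uptime: number;  // seconds assumed; units not documented
}
```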
score_refresh
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is not read-only (readOnlyHint: false) and interacts with external systems (openWorldHint: true). The description adds critical behavioral context not present in annotations: the $0.25 USD payment requirement via x402, and the specific return structure (score, tier, confidence, etc.) despite the absence of a formal output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently organized with clear visual hierarchy: purpose statement, usage trigger, cost warning, and return documentation. Every line serves a distinct function—no filler text. The front-loading of the core action ('Force a re-score') followed by conditions and constraints is optimal for agent parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (paid endpoint, on-chain dependency) and lack of output schema, the description compensates perfectly by documenting the exact return object structure. Combined with complete schema coverage for inputs and explicit cost disclosure, the description provides sufficient context for correct invocation and response handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is appropriately 3. The description lists the wallet parameter but essentially mirrors the schema's pattern specification ('0x + 40 hex chars' vs schema's regex pattern). It adds no additional semantic context (e.g., no examples of valid/invalid addresses or usage notes) beyond what the structured schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-noun combination ('Force a re-score of an AI agent wallet') that precisely defines the operation. It distinguishes from siblings like 'score_basic' and 'score_full' by emphasizing the 'refresh' aspect—using 'latest on-chain data' and targeting 'cached' scores—making the unique value proposition clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal triggers ('when you suspect the cached score is stale', 'after known on-chain activity') that clearly indicate when to invoke this tool versus simply reading existing scores. The guidance implicitly defines when NOT to use it (avoid unnecessary paid calls when data is fresh).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
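A sketch of the refresh call as characterized above; only the return fields explicitly quoted (score, tier, confidence) are included, and their types are assumptions:

```typescript
// Sketch of score_refresh I/O; PAID endpoint ($0.25 USD via x402), forces a fresh
// re-score from the latest on-chain data rather than returning a cached value.
type ScoreRefreshInput = { wallet: string }; // 0x + 40 hex chars
interface ScoreRefreshOutput {
  score: number;
  tier: string;
  confidence: number; // additional return fields exist but are not enumerated here
}
```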
score_full
- Behavior 5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare read-only/idempotent safety, the description adds essential behavioral context: the x402 payment model, versioning details ('v2.3+'), specific dimension semantics (what contributes to identity/capability scores), and return value structure (improvementPath, scoreHistory). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear front-loading of purpose followed by detailed return value breakdowns. The detailed dimensional data is necessary given the lack of output schema, though the Args/Returns sections at the bottom partially duplicate earlier content. Every section serves a distinct purpose (purpose, data specification, payment warning, parameter reminder).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage compensating for the absence of an output schema. Documents all return fields (dimensions, sybilFlag, gamingIndicators), payment prerequisites, and sibling differentiation. For a complex, paid endpoint with rich nested return data, the description provides sufficient information for correct invocation and response handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents the 'wallet' parameter (type, pattern, example). The description repeats this information ('Ethereum wallet address (0x + 40 hex chars)') without adding significant semantic context about the parameter's usage or constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and clearly identifies the resource ('full reputation score with dimension breakdown for an AI agent wallet'). It explicitly distinguishes itself from sibling tool 'score_basic' by stating 'Returns everything from basic score PLUS...', making the scope unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage constraints including the critical payment requirement ('PAID endpoint — requires x402 payment ($0.10 USD)') and retry logic. It clearly positions the tool against alternatives by contrasting with 'basic score' and detailing when the rich dimensional data is necessary versus overkill.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
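The richer score_full payload described above (everything from the basic score plus dimension breakdowns, sybilFlag, gamingIndicators, improvementPath, and scoreHistory), sketched with assumed types:

```typescript
// Assumed shape of score_full output; PAID endpoint ($0.10 USD via x402) per the description.
interface ScoreFullOutput {
  score: number;
  tier: string;
  dimensions: Record<string, number>; // e.g. identity and capability sub-scores
  sybilFlag: boolean;
  gamingIndicators: unknown[];        // structure assumed
  improvementPath: unknown;           // suggested steps to raise the score (structure assumed)
  scoreHistory: unknown[];            // prior scores over time (structure assumed)
}
```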
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%), yielding a per-tool Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
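To make the weighting concrete, here is a small TypeScript sketch of the calculation under the weights stated above; the per-tool dimension scores reuse the get_leaderboard ratings from this report, while the remaining TDQS values are placeholders:

```typescript
// Weights for the six Tool Definition Quality dimensions (from the text above).
const dimensionWeights = {
  purpose: 0.25, usage: 0.20, behavior: 0.20,
  parameters: 0.15, conciseness: 0.10, completeness: 0.10,
};

// Per-tool TDQS: weighted sum of the 1-5 dimension scores.
// Example uses get_leaderboard's ratings: Purpose 5, Usage 3, Behavior 4,
// Parameters 3, Conciseness 4, Completeness 4 -> TDQS = 3.90.
const toolScores = { purpose: 5, usage: 3, behavior: 4, parameters: 3, conciseness: 4, completeness: 4 };
const tdqs = (Object.keys(dimensionWeights) as (keyof typeof dimensionWeights)[])
  .reduce((sum, dim) => sum + dimensionWeights[dim] * toolScores[dim], 0);

// Server-level definition quality: 60% mean TDQS + 40% minimum TDQS across all tools.
const allTdqs = [tdqs, 4.1, 4.3]; // placeholder values for the other tools
const mean = allTdqs.reduce((a, b) => a + b, 0) / allTdqs.length;
const definitionQuality = 0.6 * mean + 0.4 * Math.min(...allTdqs);

// Server coherence: equal-weighted mean of the four coherence dimensions
// (Disambiguation 5, Naming Consistency 4, Tool Count 5, Completeness 4 from the checklist above).
const coherence = (5 + 4 + 5 + 4) / 4;

// Overall score (70% definition quality, 30% coherence) and letter tier.
const overall = 0.7 * definitionQuality + 0.3 * coherence;
const tier = overall >= 3.5 ? "A" : overall >= 3.0 ? "B" : overall >= 2.0 ? "C" : overall >= 1.0 ? "D" : "F";
console.log({ tdqs, definitionQuality, coherence, overall, tier });
```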