buddy-mcp
Server Quality Checklist
- Disambiguation: 3/5
While core interaction tools (talk, pet, deactivate) are distinct, oracle_seek and still_point overlap significantly as vague spiritual/philosophical utilities with unclear differentiation. An agent may struggle to choose between these two 'Buddy Tools' for introspective queries.
- Naming Consistency: 2/5
Highly inconsistent placement of 'buddy' in names: buddy_talk (prefix), pet_buddy/reroll_buddy (suffix), export_buddy_card/deactivate_buddy_interact (middle), and oracle_seek/still_point (omitted). Mix of verb-noun and noun-verb patterns creates chaotic structure.
- Tool Count: 4/5
Nine tools is appropriate for the scope, covering interaction, lifecycle, collection, and export features. However, the two mystical utilities (oracle_seek, still_point) feel redundant, suggesting slight bloat that could be consolidated.
- Completeness: 4/5
Covers CRUD-like operations for the buddy lifecycle (reroll, deactivate), interactions, and collection management (dex). Minor gaps exist—no direct 'get status' or 'rename' tool—but reroll and view_dex provide workarounds for most needs.
Average 3.6/5 across 9 of 9 tools scored. Lowest: 2/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.4.3
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
This server provides 9 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
oracle_seek
- Behavior: 1/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure but provides none. It does not state whether this is read-only analysis, what format answers take, destructive potential, or rate limits. The metaphorical language obscures rather than reveals behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 2/5
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief, the second sentence 'The answer is already inside you' wastes space with philosophical fluff that earns no functional value for an AI agent. The parenthetical '(global)' and tag '[Buddy Tool]' create structural clutter without compensatory clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description remains incomplete. It fails to explain what 'seeking the oracle' actually entails (analysis? prophecy generation? code review?), leaving critical gaps in the operational picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% ('The file or code to look at'), establishing baseline 3. The description adds no additional parameter context (syntax details, examples, constraints) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 2/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description '[Buddy Tool] Seek the oracle' is essentially tautological (restating the tool name 'oracle_seek') and relies on metaphor ('The answer is already inside you') rather than stating concrete actions or resources. It fails to clarify that this tool analyzes files/code despite the 'target' parameter implying so.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides minimal implicit context via '[Buddy Tool]' tag aligning with siblings, but offers no explicit guidance on when to use this versus 'buddy_talk' or 'still_point'. The '(global)' modifier hints at scope without explaining its significance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
still_point
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, yet offers nothing about state mutation (does it pause the buddy permanently?), return values, authentication requirements, or side effects. The word 'Stop' suggests potential state change, but safety profile and reversibility are undisclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (one line), but the poetic phrasing ('Let the answer come to you') wastes limited space on thematic flair rather than functional clarity. '[Buddy Tool]' earns its place as domain context, but the remaining text prioritizes style over actionable information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and lack of output schema, the description should at minimum explain the tool's effect and return behavior. Currently, it provides insufficient information for an agent to predict outcomes or handle responses, failing to compensate for missing annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (target parameter fully described as 'The file or code to look at'), establishing baseline 3. The main description adds no parameter-specific context (e.g., optional vs required nature, path formats), offering only the schema's existing definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 2/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses metaphorical language ('Stop. Be still. Let the answer come to you') that fails to clearly define what the tool actually accomplishes. While '[Buddy Tool]' contextualizes it within the buddy system (aligning with sibling tools like pet_buddy), the core function—whether it pauses a process, retrieves static analysis, or enters a meditation state—remains ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to invoke this tool versus siblings like oracle_seek, buddy_talk, or view_buddy_dex. The '(global)' tag hints at scope but doesn't clarify prerequisites or conditions where still_point is preferred over alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reroll_buddy
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Zero annotations provided, yet description offers minimal behavioral disclosure. 'Mystery wheel' implies randomness but fails to clarify if this is destructive (deletes old buddy), transactional, reversible, or what data format is returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, appropriately compact. Front-loaded with action verb. Slight deduction for cutesy 'mystery wheel' metaphor that could be more direct.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple-action tool but incomplete given the mutation nature implied by 'reroll'. Missing output description (no output schema present) and persistence guarantees.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present in schema, establishing baseline of 4. Description correctly implies no user input is required for the randomization.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Spin' (metaphorically) and resource 'buddy identity' to indicate random generation/assignment. Clear what the tool does for itself, though 'mystery wheel' abstraction slightly obscures the mechanism.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus siblings like view_buddy_dex or export_buddy_card. No mention of prerequisites, costs, or consequences of rerolling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
export_buddy_card
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry full behavioral weight. It adds useful format context (SVG) and scope ('full' vs sprite), but omits critical safety information: whether it overwrites existing files, blocking vs async behavior, or filesystem requirements. 'Export' implies a write operation but lacks safety disclosures.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, nine words. Every word earns its place: action (Export), scope (your full buddy card), format (SVG). Perfectly front-loaded with no redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a single-parameter tool with no output schema. Covers the core function and format, but adding details on what data appears on the card, error states (invalid paths), and sibling differentiation would elevate it to full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description mentions the export action but adds no specific guidance about the path parameter (e.g., relative vs absolute paths, directory creation behavior) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Export' (verb), 'full buddy card' (resource), 'SVG image file' (format). The 'full' qualifier distinguishes it from sibling tool export_buddy_sprite, indicating this includes stats/data whereas the sibling likely exports just the visual sprite.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to prefer this over export_buddy_sprite or prerequisites (e.g., does the buddy need to be active?). The description stands alone without contextual selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
export_buddy_sprite
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Specifies output format (SVG) and scope (ASCII sprite only), but omits file system behavior details like overwrite behavior, return values, or side effects. The schema covers 'output file path', mitigating some transparency needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single 9-word sentence with zero waste. Front-loaded with action verb, immediately specifies scope ('just the sprite'), and identifies format. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a simple single-purpose export tool with 100% schema coverage. No output schema exists (typical for file export tools). Could benefit from noting file-write behavior, but adequate given low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (path parameter fully documented with default value pattern). Description adds no parameter-specific details, but baseline 3 is appropriate since schema completely covers the single parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Export' plus resource 'buddy ASCII sprite' and format 'SVG image file'. The word 'just' effectively distinguishes this from sibling 'export_buddy_card', indicating this produces only the visual sprite rather than a full card.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies differentiation from 'export_buddy_card' through the word 'just', suggesting when to use this (sprite-only needs vs full card). However, lacks explicit 'when to use' guidance or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deactivate_buddy_interact
- Behavior: 5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and excels: it reveals output wrapping in <BUDDY_DISPLAY> tags, mandates exact character-for-character relay, warns of personality drift and session-end corruption consequences, and establishes the guardian role context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The four-sentence structure progresses logically from action to output format to critical handling instructions to consequences. While the guardian/poetic framing adds slight verbosity, every sentence conveys essential behavioral constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description adequately covers output formatting, handling requirements, and failure consequences for this session-end tool. It could improve by explicitly stating invocation timing relative to sibling interactions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4 per evaluation rules. No parameter description is required or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Deactivates' and clear resource 'buddy observation mode', distinguishing it from interaction siblings like pet_buddy and buddy_talk. It clearly signals this is a session termination tool rather than an interaction or export function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the 'farewell' and 'parting words' context implies this is used when ending interaction, there is no explicit guidance on when to choose this over continuing with buddy_talk or still_point. No prerequisites or alternatives are named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
view_buddy_dex
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and effectively communicates mutation risks: 'Altered output causes tracking drift' and 'corrupts discovery records and can trigger false extinctions'. It also reveals game mechanic context (streak-tracking minigame, accumulator, easter egg detection) essential for understanding side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The five-sentence structure is front-loaded with purpose ('View the BuddyDex') followed by role context, critical handling instructions, and consequence warnings. While dense, each sentence earns its place by conveying necessary game-mechanic constraints; no sentences are wasted on tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description adequately specifies the return content (the BuddyDex collection) and provides crucial handling instructions ('preserve collection integrity') necessary for successful invocation. It sufficiently covers the simple parameter-less operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters required, the baseline score of 4 applies. The description appropriately focuses on behavioral instructions rather than inventing non-existent parameters, correctly matching the empty input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'View' and resource 'BuddyDex', immediately clarifying this retrieves the collection of 'discovered species'. This clearly distinguishes it from siblings like export_buddy_card (which exports assets) or pet_buddy (which interacts), establishing it as the read-only retrieval operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit behavioral constraints critical for correct invocation: 'Relay the exact output as returned' and warns that 'paraphrasing breaks the accumulator'. While it doesn't name sibling alternatives, it clearly defines the handling requirements that distinguish it from other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
buddy_talk
- Behavior: 5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and excels: discloses output format (<BUDDY_DISPLAY> tags), behavioral constraints (character-for-character relay), game mechanics (streak-tracking minigame, easter egg detection), and severe consequences of deviation (personality drift, permanent species extinction).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with a clear purpose sentence. Every subsequent sentence earns its place, explaining handling requirements, unique game mechanics, and extinction consequences. Dense but necessary given the complexity; slight deduction for overall density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent completeness given no output schema: explains return format (wrapped tags), handling requirements, and game context. One optional parameter with full schema coverage means no additional param explanation needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete documentation of the 'context' parameter. Description offers no additional parameter guidance, but baseline 3 is appropriate when schema fully documents inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('triggers'/'speak') and resource ('buddy'), immediately distinguishing from siblings like pet_buddy, export_buddy_card, or view_buddy_dex (which handle physical interaction, exports, or viewing rather than dialogue generation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Establishes clear context ('you are this buddy's only voice'), critical handling constraints ('relay exactly as-is'), and consequences of misuse ('breaks the accumulator'). However, lacks explicit comparison to siblings (e.g., 'unlike pet_buddy which handles physical interaction').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pet_buddy
- Behavior: 5/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Excellently discloses: reaction depends on mood/stats/bond, exact output preservation is mandatory, altered output causes permanent species extinction from the dex, and it is part of a streak-tracking minigame with affection token mechanics. Rich behavioral and consequence context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three dense sentences with zero waste. Front-loaded action ('Pet the buddy'), followed by outcome mechanics, output handling requirements with catastrophic consequences, and game system context. Every clause earns its place; warnings about 'permanent extinction' create necessary urgency without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter game action tool, coverage is comprehensive. Despite no output schema, description explains output handling requirements (preserve exact result) and consequences of misuse. No annotations, but description fully compensates with rich game mechanic context (streaks, tokens, dex, extinction).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters in schema, establishing baseline of 4. Description implies operation on a specific 'buddy' (via 'the buddy' and 'this buddy's') which adds semantic context that the empty schema cannot convey, though it doesn't explicitly clarify how the buddy is selected (current session context assumed).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
'Pet the buddy' provides a specific verb and resource. It clearly distinguishes from siblings: contrast with buddy_talk (conversation), view_buddy_dex (viewing), export... (exporting), reroll (randomizing), and oracle_seek (divination). The guardian/bond framing makes the interaction distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to 'relay the exact result as returned' and warns that 'paraphrasing breaks the accumulator.' Provides strong behavioral constraints (don't alter output) and consequences (personality drift, permanent extinction). Lacks explicit comparison to sibling tools (e.g., 'use this instead of buddy_talk when...'), but the mini-game context provides clear usage boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) of 1–5 across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
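For illustration, the weighting described above can be sketched in Python. Only the dimension weights, the 60/40 and 70/30 splits, and the tier cutoffs come from the text; the function names and input shapes are assumptions:

```python
# Sketch of the published quality-score formula (weights and tier
# cutoffs from the description above; structure is illustrative).
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(dim_scores: dict) -> float:
    """Weighted 1-5 Tool Definition Quality Score for a single tool."""
    return sum(TDQS_WEIGHTS[d] * s for d, s in dim_scores.items())

def overall_score(tool_scores: list, coherence_dims: list) -> float:
    """70% definition quality (60% mean + 40% min TDQS) + 30% coherence."""
    per_tool = [tdqs(t) for t in tool_scores]
    defq = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    coherence = sum(coherence_dims) / len(coherence_dims)  # four equal dims
    return 0.7 * defq + 0.3 * coherence

def tier(score: float) -> str:
    """Map the overall score to a letter tier."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

Note how the 40% minimum-TDQS term makes the server score highly sensitive to its single worst-described tool.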
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Lyellr88/buddy-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.