
Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Latest release: v1.3.0

  • Disambiguation: 5/5

    With only one tool, there is no possibility of ambiguity or overlap between tools. The tool has a single, clear purpose: retrieving information from a knowledge base.

  • Naming Consistency: 5/5

    Since there is only one tool, naming consistency is inherently perfect. The tool name 'knowledge-base-retrieve' follows a clear noun-verb pattern with hyphens, which is a valid and consistent convention.

  • Tool Count: 2/5

    A single tool is too few for a server named 'Agentset', which suggests a broader set of agent-related functionality. The tool covers only retrieval, lacking operations such as adding, updating, or deleting knowledge base content, which limits its scope and utility.

  • Completeness: 2/5

    The server is severely incomplete for a knowledge base domain. It provides only retrieval, missing essential operations such as create, update, and delete. This gap will likely cause agent failures when full lifecycle management is needed.

  • Average 3.6/5 across 1 of 1 tools scored.

    See the Tool Scores section below for per-tool breakdowns.

    • No issues in the last 6 months
    • No commit activity data available
    • Last stable release: v1.3.0
    • No critical vulnerability alerts
    • No high-severity vulnerability alerts
    • No code scanning findings
    • CI is passing
  • This repository is licensed under the MIT License.

  • This repository includes a README.md file.

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • If you are the author, simply claim the server.

    If the server belongs to an organization, first add glama.json to the root of your repository:

    {
      "$schema": "https://glama.ai/mcp/schemas/server.json",
      "maintainers": [
        "your-github-username"
      ]
    }

    Then claim the server. Browse examples.

  • Add related servers to improve discoverability.

How do I sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
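
To make the arithmetic concrete, here is a minimal TypeScript sketch of the formula described above. The dimension weights, the 60/40 blend, and the tier cutoffs are taken from this page; the function names, types, and rounding are illustrative assumptions, not Glama's actual implementation.

// Per-tool Tool Definition Quality Score (TDQS): weighted mean of six
// dimension scores, each on a 1-5 scale. Weights follow the text above.
type DimensionScores = {
  purpose: number;         // 25%
  usageGuidelines: number; // 20%
  behavior: number;        // 20%
  parameters: number;      // 15%
  conciseness: number;     // 10%
  completeness: number;    // 10%
};

function tdqs(d: DimensionScores): number {
  return 0.25 * d.purpose + 0.20 * d.usageGuidelines + 0.20 * d.behavior +
         0.15 * d.parameters + 0.10 * d.conciseness + 0.10 * d.completeness;
}

// Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
// so a single poorly described tool pulls the score down.
function definitionQuality(toolScores: number[]): number {
  const mean = toolScores.reduce((a, b) => a + b, 0) / toolScores.length;
  return 0.6 * mean + 0.4 * Math.min(...toolScores);
}

// Overall score: 70% definition quality + 30% coherence, where coherence
// is the equal-weighted mean of its four dimensions.
function overallScore(defQuality: number, coherenceScores: number[]): number {
  const coherence =
    coherenceScores.reduce((a, b) => a + b, 0) / coherenceScores.length;
  return 0.7 * defQuality + 0.3 * coherence;
}

function tier(score: number): "A" | "B" | "C" | "D" | "F" {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  return score >= 1.0 ? "D" : "F";
}

// Worked example with this server's published numbers: the single tool
// scores Purpose 5, Usage Guidelines 4, Behavior 2, Parameters 3,
// Conciseness 4, Completeness 3.
const toolScore = tdqs({
  purpose: 5,
  usageGuidelines: 4,
  behavior: 2,
  parameters: 3,
  conciseness: 4,
  completeness: 3,
}); // = 3.6, matching the 3.6/5 average shown above

// Coherence: Disambiguation 5, Naming Consistency 5, Tool Count 2, Completeness 2.
const score = overallScore(definitionQuality([toolScore]), [5, 5, 2, 2]);
console.log(score.toFixed(2), tier(score)); // "3.57 A" under these assumptions

Note that the weighted 3.6 reproduces the "Average 3.6/5" reported above, which is a useful sanity check on the published weights.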

Tool Scores

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description fully carries the burden of disclosing behavior. It only describes basic functionality without addressing side effects (none expected but not stated), read-only nature, error handling, or performance characteristics. For a retrieval tool, the description should at least imply idempotency or lack of mutations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is concise and efficiently uses a bulleted list for clarity. The opening sentence is slightly generic, but overall it is well-structured and not verbose. Every line adds value.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    The description covers common use cases but lacks information about the return format (no output schema) and potential errors. For a simple retrieval tool, it is partially complete, but missing output schema details and edge case handling reduces completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, so the baseline is 3. The description adds no detail beyond the schema's parameter descriptions: it does not further explain the implications of 'rerank' or 'topK', nor does it provide usage examples for parameters.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: 'Look up information in the Knowledge Base.' It provides a specific verb ('retrieve') and resource ('Knowledge Base'), and lists concrete use cases like finding documents, policies, and product specs. Since there are no sibling tools, no differentiation is needed.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explicitly instructs when to use the tool via a bulleted list of scenarios (e.g., 'Find relevant documents,' 'Access product specifications'). It provides clear context but does not mention when not to use it or alternatives, which is less critical given no sibling tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

Copy the card badge snippet from the server page into your README.md.

Score Badge

Copy the score badge snippet from the server page into your README.md.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/agentset-ai/mcp-server'
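
For programmatic access, the same request can be made from code. The sketch below uses the fetch API available in modern JavaScript runtimes (e.g., Node 18+) and assumes only that the endpoint returns JSON; the exact response schema is not documented on this page, so the example simply prints the parsed payload.

// Fetch this server's profile from the Glama MCP directory API.
// The URL matches the curl example above.
const url = "https://glama.ai/api/mcp/v1/servers/agentset-ai/mcp-server";

const response = await fetch(url);
if (!response.ok) {
  throw new Error(`Request failed: ${response.status} ${response.statusText}`);
}
const server = await response.json(); // schema assumed, not documented here
console.log(server);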

If you have feedback or need assistance with the MCP directory API, please join our Discord server.