
knowledgelib-mcp

Server Quality Checklist

83%
Profile completion

A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Tools are mostly distinct with clear purposes. The only potential overlap is between batch_query and query_knowledge (both perform searches), but descriptions clarify that batch_query is for efficiency when searching multiple topics, while query_knowledge is the standard single search entry point. Other tools like report_issue (quality flags) and suggest_question (new content requests) have clearly separated concerns.

    Naming Consistency 5/5

    All six tools follow a consistent verb_noun pattern using snake_case: batch_query, get_unit, list_domains, query_knowledge, report_issue, suggest_question. Action verbs (batch, get, list, query, report, suggest) are used predictably with clear target nouns.

    Tool Count 5/5

    Six tools is an appropriate, well-scoped count for a knowledge retrieval server. The set covers discovery (list_domains), retrieval (query_knowledge, batch_query, get_unit), and feedback loops (report_issue, suggest_question) without bloat or redundancy.

    Completeness 4/5

    The surface covers the essential knowledge retrieval lifecycle: domain discovery, flexible search (single and batch), specific unit retrieval, and feedback mechanisms for both corrections and new content requests. Minor gaps include the lack of a domain-specific browsing tool and the odd 'STEP 1/STEP 3' labeling, which suggests a missing intermediate step, but core workflows are supported (a workflow sketch follows this checklist).

  • Average 4.3/5 across 6 of 6 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.3.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 6 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.
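The tool surface described above maps to a simple agent workflow. The sketch below is illustrative only: the tool names and parameter names (query_knowledge's query and limit, get_unit's unit_id, suggest_question's question and context) are taken from this review, while the call_tool helper and the response fields (results, unit_id) are assumptions about whatever MCP client and payload shape the host agent actually has.

def call_tool(name: str, arguments: dict) -> dict:
    # Placeholder for whatever MCP client invocation the host agent provides,
    # e.g. an MCP session's call_tool(name, arguments).
    raise NotImplementedError

def answer_topic(topic: str):
    # Discovery: enumerate the domains (and unit counts) the server covers.
    call_tool("list_domains", {})

    # STEP 1: search for relevant knowledge units; the description promises
    # relevance-ranked results with confidence scores and token estimates.
    hits = call_tool("query_knowledge", {"query": topic, "limit": 5})
    if not hits.get("results"):  # response field name assumed
        # STEP 3 fallback: request new content so a later agent gets an answer.
        call_tool("suggest_question", {"question": topic, "context": "no results found"})
        return None

    # Retrieval: fetch the full markdown (YAML frontmatter, citations) for the
    # top-ranked unit by ID.
    unit_id = hits["results"][0]["unit_id"]  # field name assumed
    return call_tool("get_unit", {"unit_id": unit_id})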

Tool Scores

  • report_issue

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations cover the safety profile (readOnly=false, destructive=false, idempotent=false). The description adds valuable lifecycle context ('Reports are reviewed and used to prioritize content updates'), but does not elaborate on side effects, persistence behavior, or what the caller should expect after submission (e.g., confirmation of receipt).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste: purpose (sentence 1), usage triggers (sentence 2), and post-submission behavior (sentence 3). Information is front-loaded and every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the 5 parameters with full schema coverage and no output schema, the description adequately covers the tool's purpose, invocation triggers, and downstream workflow. It could be improved by noting whether the operation is synchronous or if it returns a report ID, but it is sufficient for agent selection.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema carries the heavy lifting for parameter semantics. The description maps general concepts ('incorrect, outdated, or broken') to the tool's domain but does not add syntax details, validation rules, or usage examples beyond what the schema already provides, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Flag') and clear resource ('content on a knowledge unit'), explicitly stating the tool's function. It effectively distinguishes this tool from retrieval-oriented siblings like get_unit or query_knowledge by focusing on error reporting rather than data access.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit positive guidance ('Use this when you notice factual errors, dead links...') that clearly scopes when to invoke the tool. However, it lacks explicit negative guidance or named alternatives (e.g., 'Do not use for general questions; use query_knowledge instead'), which would earn a 5.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_domains

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true and idempotentHint=true, covering the safety profile. The description adds useful workflow context (discovery phase before querying) and hints at return content (unit counts), but omits details about pagination, caching, or response format that would help the agent handle the output.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. The first front-loads the action and resource; the second provides usage context. Every word earns its place with no redundancy or tautology.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (no parameters, read-only annotations) and lack of output schema, the description adequately covers the essential information: what it returns (domains with unit counts) and why to use it (discovery). It appropriately compensates for missing output schema by describing the payload content.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With zero parameters, the baseline score per rules is 4. The schema is empty (100% coverage of nothing), and the description appropriately focuses on behavior rather than inventing parameter documentation where none exists.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('List'), clear resource ('knowledge domains'), and scope detail ('with unit counts'). It effectively distinguishes this discovery tool from siblings like query_knowledge and batch_query by emphasizing the enumeration of available topics.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The second sentence ('Use this to discover what topics are covered before querying') provides clear context for when to invoke the tool relative to sibling query tools. However, it could be strengthened by explicitly naming the query siblings (query_knowledge, batch_query) rather than using the generic term 'querying'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_unit

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint and idempotentHint. The description adds valuable behavioral context about the return value format ('full raw markdown with YAML frontmatter, inline source citations...') that annotations do not cover. Does not mention error cases or rate limits, preventing a 5.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: first states purpose, second details return format. Front-loaded with the core action. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple single-parameter retrieval tool, the description is complete. It compensates for the missing output schema by detailing the return format (markdown structure, content types). Combined with complete annotations and full schema coverage, no gaps remain.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage for the single 'unit_id' parameter, the schema carries the semantic burden. The description mentions 'by ID' but does not add syntax details or usage examples beyond what the schema already provides, meeting the baseline for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb ('Retrieve') + resource ('knowledge unit') + exact scope ('by ID'). The 'specific...by ID' phrasing clearly distinguishes it from sibling tools like 'query_knowledge' (search) and 'batch_query' (bulk operations).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage context through 'specific knowledge unit by ID,' signaling this is for exact lookups rather than searches. However, it does not explicitly name sibling alternatives (e.g., 'use query_knowledge for searches') or state when not to use it.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • batch_query

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Adds valuable performance context beyond annotations ('shares a single catalog parse') explaining the efficiency mechanism. States batch limit constraint. Does not contradict readOnly/idempotent annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences, each high-value: purpose statement, efficiency rationale with sibling comparison, and operational constraint. No filler text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Strong coverage for a read-only batch tool with complete schema annotations. Minor gap: no output schema exists, and description does not clarify return structure (e.g., results grouping), though this is somewhat implied by 'batch_query' naming.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage, so baseline applies. Description mentions 'Max 10 queries' reinforcing the constraint but does not add semantic meaning to individual query parameters (q, domain, etc.) beyond what's in schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Search' + resource 'topics' clearly stated. Explicitly distinguishes from sibling 'query_knowledge' by contrasting single-call vs multiple calls.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to use ('More efficient than calling query_knowledge multiple times') and names the alternative tool. Includes operational constraint ('Max 10 queries per batch') guiding usage limits.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • query_knowledge

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare read-only/idempotent/open-world properties, so description focuses on adding return structure details ('ranked by relevance', 'confidence scores, source counts, token estimates') and workflow sequence. Does not contradict annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tightly constructed sentences with zero waste: workflow position and action, return format specification, and error-handling guidance. Front-loaded with the critical 'STEP 1' designation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite lacking an output schema, the description compensates by detailing the return metadata structure. Combined with 100% input schema coverage and complete annotations, this provides sufficient context for a search tool, though it could explicitly highlight the filtering capabilities (domain, region, jurisdiction) present in the schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, providing complete documentation for all 6 parameters (query, domain, region, jurisdiction, entity_type, limit). Description implies the query parameter through the search verb but adds no syntax details beyond the schema, warranting the baseline score of 3.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb ('Search') + resource ('knowledgelib.io knowledge units') combination clearly defines the scope. Explicitly distinguishes from sibling 'suggest_question' by stating when to use that alternative instead.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow positioning ('STEP 1') and clear fallback instruction ('If no results are found, use suggest_question'). Names the specific alternative tool to invoke in failure cases.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • suggest_question

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare idempotentHint=true and non-destructive write behavior. The description adds valuable business logic: 'Popular suggestions are prioritized for new knowledge unit creation' and 'The next agent that asks the same question will get an answer', explaining the long-term effect. Does not contradict annotations (submit/write aligns with readOnlyHint=false).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Front-loaded with 'STEP 3' workflow indicator. Four sentences each earning their place: (1) action definition, (2) trigger conditions, (3) business logic/prioritization, (4) future effect. No redundant or wasted language.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 3-parameter submission tool without output schema, the description adequately covers workflow position, triggering conditions, and downstream effects (future agent availability). Minor gap: does not describe the immediate return value (e.g., confirmation ID or success status).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% with clear examples for each parameter (question, context, domain). The description mentions 'question or topic request' aligning with the question parameter, but does not add semantic guidance beyond what the fully-documented schema already provides, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verb 'Submit' with resource 'question or topic request' to target 'knowledgelib.io'. It clearly distinguishes from sibling 'query_knowledge' by positioning this as the fallback when querying returns no results, clarifying its role in the knowledge acquisition workflow.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states 'ALWAYS call this when query_knowledge returned no results' and 'when a user asks about a topic that should be covered', providing clear when-to-use conditions and implicitly referencing the alternative tool (query_knowledge) for the primary path.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
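The STEP 1/STEP 3 workflow and the 10-query batch cap noted above suggest a simple client-side pattern for choosing between query_knowledge and batch_query. The sketch below is hypothetical: the per-query field q and the queries array shape are inferred from this review and may not match the actual schema, and call_tool again stands in for the host agent's MCP client.

MAX_BATCH = 10  # "Max 10 queries per batch" per the tool description

def search_topics(call_tool, topics: list[str]) -> list[dict]:
    results = []
    for start in range(0, len(topics), MAX_BATCH):
        chunk = topics[start:start + MAX_BATCH]
        if len(chunk) == 1:
            # One topic: the standard single-search entry point is sufficient.
            results.append(call_tool("query_knowledge", {"query": chunk[0]}))
        else:
            # Several topics: one batch call shares a single catalog parse.
            results.append(call_tool("batch_query", {"queries": [{"q": t} for t in chunk]}))
    return results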

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Card badge preview: knowledgelib-io MCP server]

Copy to your README.md:

Score Badge

[Score badge preview: knowledgelib-io MCP server]

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
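Under the weights stated above, the arithmetic can be sketched as follows. Any rounding or internal normalization Glama applies is not documented here, so treat this as illustrative rather than authoritative.

DIMENSION_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_definition_quality(dimensions: dict[str, float]) -> float:
    # Weighted 1-5 score for a single tool (TDQS).
    return sum(DIMENSION_WEIGHTS[name] * score for name, score in dimensions.items())

def overall_score(tool_scores: list[float], coherence_scores: list[float]) -> float:
    # Server-level definition quality: 60% mean TDQS + 40% minimum TDQS.
    definition_quality = 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)
    # Server coherence: the four dimensions are weighted equally.
    coherence = sum(coherence_scores) / len(coherence_scores)
    # Overall: 70% definition quality + 30% coherence.
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    # Tier cutoffs: A >= 3.5, B >= 3.0, C >= 2.0, D >= 1.0, else F.
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"

For example, the coherence dimensions reported above (4, 5, 5, 4) average to 4.5.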


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/peterbeck111/knowledgelib-io'
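The same lookup can be done from Python. The response fields are not documented on this page, so this sketch simply prints whatever JSON the endpoint returns.

import json
from urllib.request import urlopen

url = "https://glama.ai/api/mcp/v1/servers/peterbeck111/knowledgelib-io"
with urlopen(url) as response:
    print(json.dumps(json.load(response), indent=2))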

If you have feedback or need assistance with the MCP directory API, please join our Discord server.