Server Quality Checklist

Profile completion: 42%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 4 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.

Tool Scores

  • list_documents

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Mentions pagination behavior, but without annotations, the description fails to disclose other critical traits: it doesn't confirm the read-only nature (though implied by 'list'), describe the return structure, or explain error conditions given the lack of output_schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at only five words. While efficient in word count, this brevity contributes to underspecification given the lack of supporting annotations or output schema. No filler or redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Severely incomplete given zero annotations and no output_schema. The description should compensate by describing what the tool returns (document structure, total count, etc.) and confirming read-only safety, but it provides only the minimal action verb.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'pagination', which loosely maps to the 'page' parameter object, but adds no syntax guidance, format details, or constraint explanations beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States the basic action (list documents) and mentions pagination, but provides minimal scope definition. It does not explicitly distinguish this tool from the 'search' sibling, which also retrieves documents but likely with filtering capabilities.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Contains no guidance on when to use this tool versus alternatives like 'search' (for filtered queries) or 'upsert_documents'. Missing conditions or prerequisites for invocation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • delete_documents

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but offers only the word 'Delete'. It fails to specify whether deletions are permanent or soft, what occurs if IDs don't exist, or what happens when neither ID array is supplied, since no parameters are required (which could imply bulk-deletion behavior).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently compressed into six words with the action verb front-loaded. Every word serves a purpose with no redundant filler, making it appropriately sized for the tool's scope.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a destructive operation with no output schema and no annotations, the description is incomplete. It omits critical behavioral context such as return values, error handling for non-existent IDs, the consequence of calling with zero parameters (since none are required), and whether deletions are atomic.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, documenting both 'doc_ids' and 'external_ids' as arrays for deletion. The description merely restates these parameter purposes ('by ID or external ID') without adding semantic value regarding format constraints, validation rules, or the implications of providing both versus neither parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a clear verb ('Delete') and resource ('documents') with specific identifier types ('ID or external ID'). It distinguishes from siblings like 'list_documents' and 'search' (read operations) and 'upsert_documents' (create/update) through the destructive verb.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives, nor does it warn about the destructive nature or prerequisites such as confirmation requirements. It fails to clarify the relationship with 'upsert_documents' despite both being mutation operations. (A sketch of a fuller definition follows this list.)

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • upsert_documents

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description must carry full behavioral disclosure. While 'Insert or update' signals mutation, it fails to explain the upsert key logic (which field determines whether a call inserts or updates), whether updates are partial or full overwrites, or potential failure modes.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely brief at eight words with no redundancy. The phrase 'with text content' front-loads the essential requirement. However, given the lack of annotations, the extreme brevity leaves critical behavioral information unstated.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Minimally viable given good schema coverage. However, with no annotations and no output schema, the description should explain the upsert matching behavior and return values. It leaves significant gaps for a mutation tool with sibling search functionality.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage, the structured schema already documents all parameters adequately. The description mentions 'text content', which aligns with the required 'text' field, but adds minimal semantic value beyond what the schema provides. The baseline score of 3 is appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states a specific action ('Insert or update') and resource ('documents'), and the 'upsert' terminology distinguishes it from siblings delete_documents, list_documents, and search. However, it doesn't clarify the scope or matching logic (e.g., when it inserts vs updates).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (like external_id for updates) or when to prefer delete_documents or search over this mutation tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description must carry full behavioral burden. It successfully discloses the semantic/vector similarity mechanism but fails to describe return format, result ranking logic, or whether the search is approximate/exact. No mention of latency implications for semantic search.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence with no redundancy. Front-loaded with primary action. However, extreme brevity leaves gaps in contextual information (resource type, return values) that a slightly longer description could address.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Lacking output schema, the description should ideally characterize results. Omits explicit mention that this searches 'documents' (the apparent domain from siblings) and gives no hint about result structure or scoring. Adequate minimum but clear gaps remain.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% (all 3 parameters well-documented), establishing baseline 3. The description adds no additional parameter guidance, but none is needed given comprehensive schema coverage including the nested filter.doc_ids structure.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States the action (search) and mechanism (semantic) clearly, and distinguishes from 'list_documents' sibling by specifying 'similar content' via semantic search. However, it fails to specify the target resource (documents), which must be inferred from sibling tool names.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus siblings. Does not clarify when semantic search is preferred over 'list_documents' filtering, nor mention any prerequisites like existing document embeddings.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
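
To make the critiques above concrete, here is a minimal sketch of a fuller 'delete_documents' definition. Everything in it is an assumption made for illustration: the description wording, schema details, and claimed behaviors (rejecting empty calls, skipping unknown IDs) are not taken from PocketMCP. The annotation field names follow the MCP ToolAnnotations shape.

# Illustrative only: a revised tools/list entry for 'delete_documents'.
# The description text and behaviors below are assumptions, not PocketMCP's
# actual definition; annotation field names follow MCP's ToolAnnotations.
revised_delete_documents = {
    "name": "delete_documents",
    "description": (
        "Permanently delete documents by internal ID ('doc_ids') or external "
        "ID ('external_ids'). Provide at least one non-empty array; calls "
        "with neither are rejected rather than deleting everything. Unknown "
        "IDs are skipped and listed in the response. Deletions cannot be "
        "undone; use 'list_documents' or 'search' first to confirm targets."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "doc_ids": {"type": "array", "items": {"type": "string"}},
            "external_ids": {"type": "array", "items": {"type": "string"}},
        },
    },
    "annotations": {
        "readOnlyHint": False,    # mutates state
        "destructiveHint": True,  # deletes are irreversible
        "idempotentHint": True,   # repeating a delete is a no-op
    },
}

A description along these lines closes the gaps flagged above in Behavior (permanence, the zero-argument case), Completeness (handling of unknown IDs), and Usage Guidelines (when to reach for the read-only siblings), while staying short enough to keep a strong Conciseness score.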

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

PocketMCP MCP server card badge (copy the generated snippet into your README.md)

Score Badge

PocketMCP MCP server score badge (copy the generated snippet into your README.md)

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%); together these form the per-tool Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
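
The arithmetic can be sketched in a few lines. The weights and blending rules below come from the description above; the sample dimension scores are invented purely for illustration:

# Sketch of the quality-score arithmetic described above. Dimension weights
# come from the text; the example per-tool scores are made up.

DIM_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tdqs(dim_scores):
    """Tool Definition Quality Score: weighted blend of the six dimensions (1-5)."""
    return sum(DIM_WEIGHTS[d] * s for d, s in dim_scores.items())

def overall_score(tools, coherence):
    """70% definition quality (60% mean TDQS + 40% min TDQS) + 30% coherence."""
    per_tool = [tdqs(t) for t in tools]
    definition_quality = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    return 0.7 * definition_quality + 0.3 * coherence

# Invented example resembling the delete_documents review above:
example = {"purpose": 4, "usage_guidelines": 2, "behavior": 2,
           "parameters": 3, "conciseness": 5, "completeness": 2}
print(round(overall_score([example], coherence=3.0), 2))  # tier B needs >= 3.0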

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Kailash-Sankar/PocketMCP'
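
The same endpoint can also be queried from code. A minimal sketch, assuming only that the endpoint returns JSON (the response fields are not documented here, so none are named):

# Minimal sketch: fetch this server's metadata from the Glama MCP directory API.
# Assumes the endpoint returns JSON; the response shape is not documented in
# this section, so the result is just pretty-printed.
import json
import urllib.request

url = "https://glama.ai/api/mcp/v1/servers/Kailash-Sankar/PocketMCP"
with urllib.request.urlopen(url) as resp:
    server = json.load(resp)

print(json.dumps(server, indent=2))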

If you have feedback or need assistance with the MCP directory API, please join our Discord server.