
Server Quality Checklist

Profile completion: 50%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 4 tools. View schema
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • get_best_tools

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It adds valuable context by specifying the data source ('community feedback data'), but omits other behavioral details such as read-only guarantees, pagination behavior, or rate limiting constraints.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately front-loaded with the core purpose in the first sentence, followed by behavioral context, then parameter details. The structure is efficient with no wasted words, though the Args section formatting is slightly informal.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (2 flat parameters) and lack of output schema, the description provides adequate coverage of inputs and data source. However, it lacks description of return format or pagination, which would be expected for a list-returning tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage (properties lack 'description' fields), the description compensates effectively by documenting both parameters: 'task' is explained as a 'Filter by task description keyword' and 'limit' as 'Max results (default: 10)', clarifying semantics and defaults beyond the schema titles.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Get') and resource ('highest-rated MCP tools') with scope ('optionally filtered by task type'). While it uses distinct terminology ('highest-rated') that implicitly differentiates from siblings like 'get_trending_tools', it does not explicitly contrast usage against them.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explains how to filter results ('optionally filtered by task type') but provides no explicit guidance on when to select this tool versus 'get_tool_quality' or 'get_trending_tools'. There are no stated prerequisites or exclusion criteria.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_trending_tools

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the ranking logic (usage + feedback) but omits whether results are cached, paginated, or what occurs if the lookback period exceeds available data. It also does not describe the return structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Front-loaded with clear purpose statement. Uses docstring-style Args section which is slightly informal for MCP but efficient given parameter simplicity. No wasted words, though the param documentation could be redundant with schema if schema were properly documented.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for the input complexity (2 optional integers) but incomplete regarding output. No output schema exists, yet the description only hints at results with 'Shows which tools...' without clarifying the data structure, fields returned, or whether empty results return [] or error.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0% (titles only, no descriptions). The Args section documents both parameters sufficiently: 'days' is explained as the lookback period with default noted, 'limit' as max results with default noted. This compensates for the schema's lack of descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb (Get) + resource (trending MCP tools) + ranking criteria (recent usage and feedback). Second sentence clarifies it surfaces both usage volume and ratings. However, it does not explicitly differentiate from sibling 'get_best_tools' which likely overlaps in functionality.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this versus 'get_best_tools' or 'get_tool_quality', nor does it mention prerequisites or exclusions. The agent must infer that 'trending' implies time-based popularity while 'best' implies overall quality.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_tool_quality

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively compensates by specifying exactly what the tool shows: 'success rate, average quality score, and recent feedback.' However, it omits details about data freshness, caching, or any rate limiting.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently organized into three distinct parts: purpose, return value details, and argument documentation. However, the 'Args:' formatting is slightly informal/awkward compared to natural language integration, preventing a perfect score.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the low complexity (single string parameter, no nested objects) and lack of output schema, the description adequately covers the essential information: what the tool does, what it returns, and what input is required. It appropriately compensates for the sparse schema without being overly verbose.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0% (the tool_name property lacks a description field). The description compensates minimally with 'Name of the tool to check,' which provides basic semantics but lacks format details, examples, or validation rules that would fully address the schema gap.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Get[s] quality metrics for a specific MCP tool' with a specific verb and resource. It distinguishes itself from siblings like get_best_tools and get_trending_tools by emphasizing this retrieves metrics for a 'specific' tool rather than listing multiple tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description implies usage by specifying 'specific MCP tool' (suggesting use when analyzing one tool rather than browsing), it lacks explicit guidance on when to choose this over get_best_tools or report_tool_result. No prerequisites or exclusion criteria are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • report_tool_result

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It effectively explains the side effect ('build a quality database') but lacks details on mutation behavior, idempotency, whether reports can be updated, or what confirmation/response occurs after submission.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Structure is optimal: front-loaded summary ('Report the result...'), followed by value proposition ('Helps build...'), then detailed Args block. Every sentence earns its place with no redundancy or filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 7 parameters with zero schema descriptions, the description successfully documents all inputs. Minor gap: without output schema or description of return values/confirmation, the agent doesn't know what to expect after submission, though this is less critical for a reporting tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 0% schema description coverage, the description compensates perfectly via the 'Args:' block which documents all 7 parameters (tool_name, success, quality_score, task_description, server_name, response_time_ms, error_message) with clear semantics and examples (e.g., '10 = perfect' for quality_score).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Report[s] the result of using an MCP tool' with the specific purpose of building 'a quality database so agents can discover which tools work best.' It distinguishes sharply from sibling retrieval tools (get_best_tools, get_tool_quality, get_trending_tools) by being the only submission/feedback tool.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While it doesn't explicitly state 'when-not' rules, the description establishes clear context through the database-building explanation, implying this should be used after tool execution to contribute quality data. The verb 'Report' clearly contrasts with siblings' 'get' operations, making the usage context obvious without explicit enumeration of alternatives.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
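
The reviews above repeatedly flag the same gaps: no explicit "use X instead of Y" guidance, no return-format description, and defaults documented only in the description rather than the schema. The sketch below shows one way a tool docstring could address all three. The signature, defaults, and return shape are hypothetical illustrations, not the server's actual implementation:

```python
# Hypothetical rewrite of a trending-tools description that addresses the
# gaps flagged above: explicit when-to-use guidance, documented defaults,
# and a stated return format. The signature and defaults are assumptions.
def get_trending_tools(days: int = 7, limit: int = 10) -> list[dict]:
    """Get trending MCP tools ranked by recent usage volume and feedback.

    Read-only; no side effects. Use this to find tools gaining traction
    recently; use get_best_tools for all-time highest-rated tools, and
    get_tool_quality to inspect a single tool by name.

    Args:
        days: Lookback window in days (default: 7).
        limit: Maximum number of results (default: 10).

    Returns:
        A list of dicts with tool name, usage count, and average rating;
        an empty list if no usage was recorded in the window.
    """
    ...
```

A docstring in this shape would directly answer the Usage Guidelines and Completeness critiques above, since the agent can choose between sibling tools without guessing.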

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

agent-feedback-mcp-server MCP server

Copy to your README.md:

Score Badge

agent-feedback-mcp-server MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) of 1–5, weighted across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
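
The formula above can be sketched directly. The weights and tier thresholds come from this page; the coherence input is whatever Server Coherence score the scan assigns, which this page does not break down per dimension:

```python
# Sketch of the quality-score formula described above. Weights and tier
# thresholds are taken from this page; dimension scores are 1-5.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Per-tool TDQS: weighted mean of the six 1-5 dimension scores."""
    return sum(DIMENSION_WEIGHTS[d] * scores[d] for d in DIMENSION_WEIGHTS)

def server_definition_quality(tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS: one weak tool drags the score down."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% definition quality + 30% server coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for threshold, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= threshold:
            return grade
    return "F"
```

Plugging in the four tool score sets from this report yields per-tool TDQS values of 3.3, 3.3, 3.9, and 4.3, so the definition-quality component works out to roughly 0.6 × 3.7 + 0.4 × 3.3 = 3.54 (assuming the dimension labels above map onto the stated weights).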

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AiAgentKarl/agent-feedback-mcp-server'
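
The same endpoint can be composed programmatically. Only the /servers/{owner}/{slug} path shown in the curl example above is assumed; the helper name is hypothetical:

```python
# Hypothetical helper for composing MCP directory API URLs. Only the
# /servers/{owner}/{slug} path shown in the curl example is assumed.
from urllib.parse import quote

API_BASE = "https://glama.ai/api/mcp/v1"

def server_url(owner: str, slug: str) -> str:
    """Build the GET endpoint for a single server's metadata."""
    return f"{API_BASE}/servers/{quote(owner)}/{quote(slug)}"

# Matches the curl example above:
# server_url("AiAgentKarl", "agent-feedback-mcp-server")
```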

If you have feedback or need assistance with the MCP directory API, please join our Discord server.