
Server Quality Checklist

Profile completion: 75%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool serves a clearly distinct purpose: comparative analysis (compare_protocols), detailed retrieval (get_protocol_info), and ecosystem discovery (scan_opportunities). No functional overlap exists between them.

    Naming Consistency: 5/5

    All three tools follow a consistent verb_noun snake_case pattern (compare_protocols, get_protocol_info, scan_opportunities) with predictable action words appropriate to their functions.

    Tool Count: 3/5

    Three tools is borderline thin for the domain; the set covers basic read, compare, and scan operations, but the minimal surface leaves little room for the additional analytical or filtering capabilities that intelligence workflows typically require.

    Completeness: 3/5

    A notable gap exists in protocol discovery: get_protocol_info and compare_protocols appear to require specific protocol identifiers, but no list_protocols or search_protocols tool is provided to discover the available identifiers first. (A minimal sketch of such a tool follows this checklist.)

  • Average 2.7/5 across 3 of 3 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 3 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.
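
To make the Completeness gap noted in the checklist above concrete, here is what such a discovery tool could look like, expressed as a Python dict in the standard MCP tool shape (name, description, inputSchema). This is a hypothetical sketch, not part of the actual server; the wording is illustrative, and the example identifiers echo the enum values discussed in the tool scores below.

# Hypothetical list_protocols tool closing the discovery gap noted above.
# Not part of the actual server; the name, description, and identifiers
# shown here are illustrative.
list_protocols = {
    "name": "list_protocols",
    "description": (
        "List all agent payment protocols known to this server, returning "
        "each protocol's identifier (e.g. ap2, x402) and a one-line summary. "
        "Call this first to find valid identifiers for get_protocol_info "
        "and compare_protocols."
    ),
    "inputSchema": {"type": "object", "properties": {}},
}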

Tool Scores

  • scan_opportunities

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'scan' implies a read-only operation, the description fails to clarify what constitutes an 'opportunity,' the volume of results expected, performance characteristics, or whether results are real-time or cached.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with no redundancy or filler. However, it is under-specified for a tool with undocumented parameters and no annotations.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations, output schema, and parameter descriptions, the description is insufficient. It omits expected return structure, parameter semantics, and behavioral constraints necessary for an agent to invoke the tool correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description must compensate by explaining the two parameters (days, min_score). It completely fails to do so, leaving critical semantic gaps: 'days' likely refers to a lookback window but is undefined, and 'min_score' references an unexplained scoring system. (A fuller definition addressing these gaps is sketched after the tool scores below.)

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The verb 'Scan' is specific and the domain 'agent payments ecosystem' provides context, but 'actionable opportunities' is vague marketing language that fails to specify the concrete resource type or entity being returned. It minimally distinguishes from siblings (compare_protocols, get_protocol_info) by implying a discovery operation rather than retrieval or comparison.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus the sibling tools compare_protocols or get_protocol_info. There are no stated prerequisites, conditions, or exclusion criteria.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_protocol_info

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get details' implies a read-only operation, the description fails to address idempotency, error handling (e.g., invalid protocol codes), or what 'details' encompasses.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single-sentence description is efficiently structured and front-loaded. However, its extreme brevity leaves critical gaps—particularly regarding the opaque protocol codes—preventing a perfect score.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (single enum parameter), the description meets minimum viability but remains incomplete. It omits the meaning of protocol identifiers and provides no indication of the return value structure despite the lack of output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0%, requiring the description to compensate. It maps the 'protocol' parameter to the concept of 'agent payment protocol,' adding necessary semantic context. However, it fails to explain the five enum values (ap2, acp, x402, mpp, ucp) or their meanings.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves details using the specific verb 'Get' and resource 'agent payment protocol.' The word 'specific' subtly distinguishes it from the sibling 'compare_protocols,' though it doesn't explicitly name the alternative.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus 'compare_protocols' or 'scan_opportunities.' It lacks prerequisites, exclusion criteria, or workflow context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • compare_protocols

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to state whether this is a read-only operation, what 'key dimensions' are evaluated, or what format the comparison output takes. 'Across key dimensions' implies a structured analysis but lacks specifics on behavior or side effects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single efficient sentence with no redundant words. However, given the absence of parameters, annotations, and output schema, the extreme brevity contributes to under-specification of behavioral traits and return values.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with zero parameters and no output schema, the description successfully identifies the operation but fails to compensate for missing structured metadata. It omits critical context such as comparison dimensions, output format, and selection criteria for which protocols are compared.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters, establishing a baseline score of 4. The description does not need to compensate for parameter documentation in this case, though it could have clarified whether the protocols to compare are specified elsewhere or if it compares all available protocols.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies the core action ('Compare') and resource ('agent payment protocols'), providing a specific verb-object pair. However, while 'across key dimensions' hints at the comparison methodology, it lacks explicit differentiation from sibling tools like get_protocol_info.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus the available alternatives (get_protocol_info, scan_opportunities). There are no explicit when/when-not conditions or references to prerequisites for the comparison operation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
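
Taken together, the critiques above mostly trace back to one-line descriptions. As a concrete illustration, here is a hypothetical fuller definition for scan_opportunities, again in the standard MCP tool shape. Every behavioral claim in it (the read-only note, defaults, and score range) is an assumption made for illustration, not the server's documented behavior.

# Hypothetical fuller definition for scan_opportunities. All behavioral
# details below (read-only, defaults, score range) are illustrative
# assumptions, not documented server behavior.
scan_opportunities = {
    "name": "scan_opportunities",
    "description": (
        "Scan the agent payments ecosystem and return a scored list of "
        "recent opportunities. Read-only; no side effects. Use "
        "get_protocol_info or compare_protocols when the protocols of "
        "interest are already known; use this tool for discovery."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "days": {
                "type": "integer",
                "description": "Lookback window in days (illustrative default: 7).",
            },
            "min_score": {
                "type": "number",
                "description": "Minimum opportunity score to include, on an "
                               "illustrative 0-100 scale.",
            },
        },
    },
}

Even a description of this length would directly address the Parameters, Behavior, and Usage Guidelines critiques above.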

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Badge preview: intelligence-mcp MCP server]

Copy the snippet from the server page to your README.md.

Score Badge

[Badge preview: intelligence-mcp MCP server]

Copy the snippet from the server page to your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
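
Expressed as a short calculation, the weighting above looks like the following. This is a minimal sketch in Python; the function and dictionary names are ours, not Glama's.

# Illustrative implementation of the published weighting; names are ours.
DIM_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(dims: dict[str, float]) -> float:
    """Tool Definition Quality Score: weighted mean of six dimensions, each 1-5."""
    return sum(DIM_WEIGHTS[d] * dims[d] for d in DIM_WEIGHTS)

def overall_score(tool_dims: list[dict[str, float]], coherence: float) -> float:
    """Overall quality: 70% definition quality (60% mean + 40% min TDQS) + 30% coherence."""
    scores = [tool_tdqs(d) for d in tool_dims]
    definition_quality = 0.6 * (sum(scores) / len(scores)) + 0.4 * min(scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, grade in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return grade
    return "F"

As a sanity check, applying tool_tdqs to the per-dimension scores reported above gives roughly 2.3 for scan_opportunities, 2.95 for get_protocol_info, and 3.0 for compare_protocols, a mean of about 2.75, consistent with the reported 2.7/5 average.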


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/goodmeta/intelligence-mcp'
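
The same call in Python, using only the standard library; the response schema is not documented on this page, so the sketch simply pretty-prints whatever JSON comes back.

import json
from urllib.request import urlopen

# Same endpoint as the curl example above.
url = "https://glama.ai/api/mcp/v1/servers/goodmeta/intelligence-mcp"

with urlopen(url) as resp:
    server = json.load(resp)

# Pretty-print; the response shape is not documented here.
print(json.dumps(server, indent=2))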

If you have feedback or need assistance with the MCP directory API, please join our Discord server.