
Server Quality Checklist

Profile completion: 50%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.6.1

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 3 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.

Tool Scores

  • get_package_readme

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full behavioral disclosure burden. It successfully states 'read-only, safe to call multiple times,' output format ('markdown'), truncation behavior ('truncated with a note'), and cost implications ('Larger values use more tokens'). Only error-handling details (e.g., 404 behavior) are missing for a perfect score.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core purpose, followed by safety notes, usage guidelines, Args section, and return value details. While comprehensive, the final sentence suggesting post-call actions ('After reading the README, you can suggest...') slightly exceeds strict tool description scope, though it provides workflow context.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple read operation with 3 parameters and an output schema, the description is complete. It explains the input parameters, output behavior (markdown, truncation), workflow relationship to siblings, and even post-call URL construction, leaving no significant gaps for agent operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate fully. It provides an 'Args' section documenting all three parameters: owner/repo include examples ('Alamofire'), and max_length explains the default (4000), special case (0 for full), and cost implications—substantially exceeding baseline requirements.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
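    To make the documented defaults concrete, here is a hypothetical invocation of get_package_readme. The owner example ('Alamofire') is quoted from the description; the repo value and the exact argument names are assumptions for illustration only.

```python
# Hypothetical arguments for get_package_readme, based on the Args section
# quoted in the review. "Alamofire" is the example owner cited there; the
# repo value and exact key names are illustrative assumptions.
call = {
    "owner": "Alamofire",
    "repo": "Alamofire",   # assumed; owner/repo come from search results
    "max_length": 0,       # documented special case: 0 returns the full README
}

# The documented default is max_length=4000; larger values use more tokens.
default_call = {"owner": "Alamofire", "repo": "Alamofire"}  # max_length defaults to 4000
```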

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with 'Fetch the README of a Swift package from GitHub,' providing a specific verb (Fetch), resource (README), and scope (Swift package from GitHub). It distinguishes itself from sibling search_swift_packages by explicitly stating 'Use this after search_swift_packages,' clarifying this is for detail retrieval, not discovery.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to use the tool: 'Use this after search_swift_packages to get details about a specific package.' It also clarifies the parameter workflow: 'The owner and repo values come from search results,' directly referencing the sibling tool's output and establishing a clear sequential relationship between the tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_search_filters

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden. It successfully documents return structure ('dict with keys platforms and product_types') and behavioral classification ('DISCOVERY tool'). Minor gap: no mention of whether values are cached/static or fetched live, or any rate limiting.
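    As a sketch of what an agent might receive, here is a hypothetical return value matching the documented structure ('dict with keys platforms and product_types'). The list contents are invented: ios/macos appear elsewhere in this review, while the product types are pure placeholders.

```python
# Hypothetical list_search_filters result: the two top-level keys are
# documented; all values shown here are illustrative only.
result = {
    "platforms": ["ios", "macos"],               # values quoted elsewhere in the review
    "product_types": ["library", "executable"],  # assumed example values
}
```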

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste: sentence 1 states purpose, sentence 2 provides usage guideline labeled 'DISCOVERY', sentence 3 documents return structure. Perfectly front-loaded and sized for the tool's complexity.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite having an output schema (per context signals), the description comprehensively documents return values and structure. Establishes clear relationship to sibling search_swift_packages. Complete for a zero-parameter discovery utility.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters (an empty object); per the scoring rules, the baseline score for a zero-parameter tool is 4. The description correctly implies no inputs are needed by omitting any parameter discussion, which is appropriate for this tool type.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Return', 'DISCOVERY tool') and clearly links to sibling tool search_swift_packages by name. It precisely scopes the resource as 'valid values for the constrained parameters', distinguishing it from the actual search execution performed by its siblings.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to use ('call this first if you are unsure what values are accepted') and establishes clear workflow precedence relative to search_swift_packages. Provides specific parameter names (platforms, product_type) to trigger usage recognition.

  • search_swift_packages

    Behavior 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations are absent, so the description carries full disclosure burden and succeeds excellently. It explicitly states 'This is a QUERY tool — read-only, safe to call multiple times', disclosing idempotency and safety. It also reveals pagination behavior ('Check has_more in the response') and parameter constraints ('At least one parameter must be provided').

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is necessarily lengthy given 13 undocumented parameters, but it remains well structured, with clear sectioning (opening statement, behavioral note, constraint, Args list, workflow guidance). Every sentence earns its place; the examples and exclusion syntax ('!') are critical for correct usage. The minor deduction reflects verbosity forced by the absent schema descriptions rather than any redundancy in the text itself.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 13 optional parameters, zero schema descriptions, no annotations, but the presence of an output schema, the description achieves completeness. It documents all parameters, explains response handling ('Check has_more'), states constraints, provides workflow integration with siblings, and covers behavioral traits, leaving no significant gaps that would cause an agent to invoke the tool incorrectly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With schema description coverage at 0% per context signals, the description comprehensively compensates by documenting all 13 parameters in the Args section. It provides semantic meaning (e.g., 'Filter by repository owner'), syntax details ('Prefix with "!" to exclude'), format examples ('ISO8601 date (YYYY-MM-DD)'), and valid value lists ('ios, macos...') that are completely absent from the schema.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Search') and resource ('Swift Package Index for packages'), clearly stating what the tool does. It distinguishes itself from sibling tools by explicitly mentioning both 'list_search_filters()' and 'get_package_readme()' as related tools to call before and after using this one.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance: when to use 'list_search_filters()' first ('If you are unsure what values are valid'), and what to do after results ('use get_package_readme...to read the README'). Also states critical constraints: 'At least one parameter must be provided' and 'Parameters are combined with AND logic', which are essential for correct invocation.
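    A hypothetical argument set illustrating the documented constraints (at least one parameter required, AND combination, '!' exclusion, ISO8601 dates). The parameter names themselves are assumptions, since the underlying schema carries no descriptions.

```python
# Hypothetical search_swift_packages arguments. Key names are illustrative;
# only the constraints come from the tool description quoted in the review.
args = {
    "platform": "ios",              # one of the documented platform values
    "author": "!apple",             # "!" prefix excludes this repository owner
    "last_activity": "2024-01-01",  # ISO8601 date (YYYY-MM-DD)
}

assert len(args) >= 1  # "At least one parameter must be provided"
# All supplied filters are combined with AND logic per the description.
```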

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.


How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
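The formula above can be sketched as follows; the weights and tier cutoffs are taken from this section, while the function and key names are my own.

```python
# Sketch of the published scoring formula. Weights and cutoffs come from
# the text above; variable and key names are illustrative.

TDQS_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tdqs(scores):
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(TDQS_WEIGHTS[d] * scores[d] for d in TDQS_WEIGHTS)

def definition_quality(per_tool_scores):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    values = [tdqs(s) for s in per_tool_scores]
    return 0.6 * (sum(values) / len(values)) + 0.4 * min(values)

def overall_score(per_tool_scores, coherence):
    """Overall quality: 70% definition quality + 30% server coherence."""
    return 0.7 * definition_quality(per_tool_scores) + 0.3 * coherence

def tier(score):
    """Map an overall score to a letter tier (A/B/C/D/F)."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

Because 40% of the definition-quality component comes from the minimum TDQS, one poorly described tool drags the whole server down, which is the behavior the text describes.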

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/detailobsessed/spm-search'
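The same endpoint can be called from code. A minimal Python sketch, assuming the endpoint returns a JSON object; only the URL itself comes from the curl example above:

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner, name):
    # Build the per-server endpoint shown in the curl example above.
    return f"{API_BASE}/{owner}/{name}"

def fetch_server(owner, name):
    # Assumption: the endpoint returns a JSON object describing the server.
    with urllib.request.urlopen(server_url(owner, name)) as resp:
        return json.load(resp)

# fetch_server("detailobsessed", "spm-search") would retrieve this server's record.
```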

If you have feedback or need assistance with the MCP directory API, please join our Discord server.