
Server Quality Checklist

Profile completion: 100%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    With only one tool available, there is no possibility of overlap or confusion between tools; the single purpose is clearly distinct.

  • Naming Consistency: 5/5

    The single tool follows a clear snake_case convention with a descriptive verb-noun pattern (sweeppea_connect).

  • Tool Count: 1/5

    The server's description claims support for sweepstakes management (referencing 66 tools), yet it exposes only a single connection utility, an extreme mismatch for the domain.

  • Completeness: 1/5

    For legally compliant sweepstakes management, providing only a connection details tool without any operations for creating sweepstakes, managing entries, or conducting drawings is severely incomplete.

  • Average 4.2/5 across the 1 tool scored (1 of 1).

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.1

  • Tools from this server were used 2 times in the last 30 days.

  • This repository includes a glama.json configuration file.

  • This server provides 1 tool.
  • No known security issues or vulnerabilities reported.

  • This server has been verified by its author.

Tool Scores

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the authentication requirement (critical behavioral context), but omits other behavioral traits such as whether the connection is cached, what happens if credentials are invalid, or the specific format/structure of the returned connection details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of exactly two high-value sentences: the first establishes purpose and domain context, while the second states mandatory prerequisites. There is no redundant or filler text; every word serves a specific function for agent decision-making.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (zero parameters, no output schema), the description adequately covers the essential information needed for invocation: what it does, what it requires, and what it returns (broadly). A minor gap remains in not specifying the structure of the 'connection details' return value, though this is partially mitigated by the tool's likely role as a preliminary setup call.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters, which per the rubric establishes a baseline score of 4. The description correctly does not invent parameters, and the absence of parameter documentation does not hinder tool invocation since no arguments are required.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Returns') and resource ('connection details'), immediately clarifying the tool's function. It also provides domain context explaining that Sweeppea involves '66 tools for legally compliant sweepstakes management,' which helps the agent understand the server's purpose even without sibling tools to differentiate from.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description effectively states prerequisites ('Requires a Sweeppea subscription and API key'), preventing invocations by unauthenticated agents. While it lacks explicit 'when-to-use' guidance (e.g., 'call this first'), the tool name 'sweeppea_connect' combined with the credential requirements sufficiently implies this is an initialization step.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge and Score Badge

Embeddable badges for the sweeppea-mcp-info MCP server are available in two styles (a card badge and a score badge), each with a markdown snippet to copy into your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%); the weighted sum is the tool's Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
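
As a concrete sketch of that arithmetic, the formula can be replayed against this server's own published scores. The dimension weights come from the text above; the final rounding step is an assumption, since the page does not document how Glama rounds.

def tool_definition_quality(scores):
    # Six dimensions with the weights stated above.
    weights = {
        "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
        "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
    }
    return sum(weights[k] * scores[k] for k in weights)

# Per-dimension scores from the Tool Scores section above.
tool = {"purpose": 5, "usage": 4, "behavior": 3,
        "parameters": 4, "conciseness": 5, "completeness": 4}
tdqs = tool_definition_quality(tool)  # 4.15, reported above as 4.2/5

# With a single tool, mean TDQS and minimum TDQS coincide.
definition_quality = 0.6 * tdqs + 0.4 * tdqs  # 4.15

# Server Coherence: Disambiguation 5, Naming 5, Tool Count 1, Completeness 1.
coherence = (5 + 5 + 1 + 1) / 4  # 3.0

overall = 0.7 * definition_quality + 0.3 * coherence  # ~3.8, tier A (>= 3.5)

This reproduces the 4.2/5 tool average shown above and lands the server in tier A despite the 1/5 coherence scores for Tool Count and Completeness.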

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Sweeppea-Development-Lab/sweeppea-mcp-info'
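
The same endpoint can also be called from code. A minimal Python sketch, assuming only that the endpoint returns JSON (the response schema is not shown on this page):

import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/Sweeppea-Development-Lab/sweeppea-mcp-info"

with urllib.request.urlopen(URL) as resp:
    server = json.load(resp)

# Pretty-print whatever the API returns; no field names are assumed.
print(json.dumps(server, indent=2))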

If you have feedback or need assistance with the MCP directory API, please join our Discord server.