
Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Disambiguation 5/5

    Each tool has a clearly distinct purpose with no overlap: availability checking, safety context, campsite finding, EV charging, weekend planning, similarity matching, park details, weather forecasting, and park searching. The descriptions reinforce unique functions, making tool selection unambiguous.

    Naming Consistency 5/5

    All tools follow a consistent verb_noun pattern (e.g., check_availability, find_campsites, get_park_details) with no deviations in style or convention. This predictability aids agent understanding and navigation of the toolset.

    Tool Count 5/5

    With 9 tools, the set is well-scoped for trip planning and campsite booking, covering key aspects like availability, safety, amenities, weather, and search. Each tool earns its place without redundancy or bloat, fitting typical server scope expectations.

    Completeness 5/5

    The toolset provides comprehensive coverage for campsite and park planning, including search, details, availability, safety, weather, and specialized features like EV charging and similarity matching. No obvious gaps exist; agents can handle end-to-end planning workflows effectively.

  • Average 3.4/5 across 9 of 9 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 9 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • Are you the author?

  • Add related servers to improve discoverability.

Tool Scores

  • get_park_details

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already establish read-only and non-destructive safety properties. The description adds valuable behavioral context by listing what constitutes the 'park profile' (facilities, trails, scout-relevant details), helping agents know what data to expect, though it omits error behaviors and rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single efficient sentence front-loaded with the action verb 'Return'. Every clause adds specific content categories, though the final prepositional phrase 'for a specific park' is somewhat vague given the parameter complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of an output schema, the description adequately enumerates return content (facilities, rules, etc.). However, with the simple input parameters left entirely undescribed in the schema, the description should have documented the park identification options; this leaves a meaningful gap.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description must compensate for undocumented parameters. While it mentions 'for a specific park', implying identification is needed, it fails to explain the two alternative identifier parameters (park_id vs park_name) or the oneOf constraint that only one is required.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool returns a 'park profile' with specific content types (facilities, rules, trails, amenities, scout-relevant planning details). The phrase 'for a specific park' implicitly distinguishes it from discovery-oriented siblings like search_parks or find_similar, though it lacks explicit contrast.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to select this tool versus alternatives like search_parks (for discovery) or get_weather/check_availability (for specific facts). It also fails to mention the prerequisite of identifying a specific park first.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
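The oneOf constraint criticized above is a standard JSON Schema pattern. A hypothetical fragment (illustrative only; the server's actual schema is not shown in this report) that accepts exactly one of park_id or park_name looks like this:

```json
{
  "type": "object",
  "properties": {
    "park_id": { "type": "string", "description": "Stable park identifier." },
    "park_name": { "type": "string", "description": "Human-readable park name." }
  },
  "oneOf": [
    { "required": ["park_id"] },
    { "required": ["park_name"] }
  ]
}
```

Because oneOf demands that exactly one branch validates, supplying both identifiers fails validation; surfacing that rule in the tool description, not just the schema, is the fix the Parameters scores call for.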

  • search_parks

    Behavior 2/5

    While annotations declare readOnlyHint and non-destructive behavior, the description adds minimal behavioral context beyond listing filters. It does not explain what 'supported parks' means (the scope of the dataset), does not clarify pagination behavior despite the limit/offset parameters, and gives no indication of result format or default sorting.

    Conciseness 4/5

    The description is a single, compact sentence that efficiently lists the available search dimensions without redundancy. However, given the tool's complexity (8 optional parameters) and lack of an output schema, this extreme brevity leaves critical information unstated.

    Completeness 3/5

    The description covers the primary search intent and filter categories but leaves significant gaps for an 8-parameter tool with no output schema. It omits the optional nature of the parameters, pagination instructions, and return value structure. It is minimally viable but incomplete.

    Parameters 4/5

    With only 25% schema description coverage (only 'near' and 'state' have descriptions), the description compensates well by semantically mapping conceptual filters to parameter names: 'origin'→near, 'drive time'→max_drive_minutes, 'camping types'→has_spot_types, and 'group-camping support'→has_group_camping. It misses documentation for the limit/offset pagination parameters.

    Purpose 4/5

    The description clearly states the verb (Search), resource (parks), and the specific filter dimensions available (origin, drive time, activities, camping types, group camping). However, it does not differentiate from sibling tools like 'find_campsites' or 'get_park_details', leaving ambiguity about which tool to use for camping-specific queries versus general park discovery.

    Usage Guidelines 2/5

    No guidance is provided on when to use this tool versus alternatives like 'find_campsites' or 'get_park_details'. It also fails to mention that all parameters are optional or what happens when no filters are applied.

  • find_similar

    Behavior 3/5

    Annotations confirm read-only safety (readOnlyHint: true). The description adds context about the date-range filtering behavior, but does not disclose the similarity matching methodology, pagination behavior (limit/offset), or what constitutes a campsite object in the response.

    Conciseness 4/5

    A single sentence front-loaded with the primary verb and resource. Efficient structure, though arguably too terse given the complexity of the conditional schema (oneOf input requirements, if-then date constraints).

    Completeness 3/5

    Acknowledges the complex input alternatives (description vs. site reference) and the availability date constraint, but omits the conditional schema logic (dates are required when available_only=true) and provides no output context given the lack of an output schema.

    Parameters 2/5

    Schema coverage is low (27%), so the description must compensate. While it conceptually covers the reference inputs (description/site) and date parameters, it fails to explain the filtering parameters (near, max_drive_minutes) or pagination controls (limit, offset), leaving significant semantic gaps.

    Purpose 4/5

    Clearly states the core task (finding similar campsites) and identifies the two input modes (reference site vs. description). However, it doesn't define what 'similar' means algorithmically or explicitly distinguish the tool from its sibling 'find_campsites'.

    Usage Guidelines 3/5

    Implies usage through the 'optionally filtering' language, suggesting when to apply the availability constraint. However, it lacks explicit guidance on when to choose this over 'find_campsites' or 'search_parks', and doesn't mention the conditional requirement that dates are mandatory when filtering for availability.

  • get_weather

    Behavior 4/5

    Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context by specifying the two data types returned (real-time 'forecast conditions' and historical 'monthly normals'), which helps the agent understand the tool's data scope beyond what annotations provide.

    Conciseness 4/5

    The single 12-word sentence is appropriately front-loaded with the action verb and contains zero filler. However, it packs multiple concepts (forecast, normals, trip planning) that could benefit from slight structural separation for clarity.

    Completeness 3/5

    Given the parameter complexity (4 parameters with oneOf logic) and the lack of an output schema, the description minimally suffices by stating the return value types. However, it omits the parameter relationship logic and temporal filtering behavior (date vs. month) necessary for correct invocation.

    Parameters 2/5

    With 50% schema description coverage (date and month are described; park_id and park_name lack descriptions), the description fails to compensate. It only vaguely references 'at a specific park' without clarifying the mutually exclusive park_id/park_name requirement (via oneOf) or explaining whether date and month are alternative filters for different data types.

    Purpose 4/5

    The description specifies the action ('Return'), resource ('forecast conditions and monthly normals'), and scope ('at a specific park'), clearly distinguishing it from sibling tools like get_park_details or find_campsites. However, it uses the generic verb 'Return' rather than a stronger retrieval verb, and doesn't explicitly differentiate itself from potential weather-related tools.

    Usage Guidelines 2/5

    The phrase 'for trip planning' provides vague contextual usage but offers no explicit guidance on when to use this tool versus alternatives (e.g., get_park_details for general info), mentions no prerequisites, and describes no exclusions or error conditions.

  • check_availability

    Behavior 3/5

    Annotations declare readOnlyHint=true and destructiveHint=false. The description adds valuable context about what gets returned ('detailed site-level', 'capacity context') that the annotations don't cover, but lacks details on error cases, rate limits, or behavior when no spots are available.

    Conciseness 5/5

    A single efficient sentence with zero waste, front-loaded with the action verb 'Check'. Each phrase ('detailed site-level', 'spot types', 'capacity context') adds distinct value without redundancy.

    Completeness 3/5

    Decent coverage for a 6-parameter tool: it establishes the domain (park availability) and hints at filtering capabilities. However, it fails to document the park identification XOR requirement and omits the group_size parameter entirely, which are significant gaps given the complexity.

    Parameters 3/5

    Schema coverage is low (33%). The description implicitly covers the park (id/name), dates, and spot_type by mentioning them, but completely omits group_size and fails to explain the critical oneOf constraint requiring either park_id or park_name. Partial compensation for the schema gaps.

    Purpose 4/5

    The description uses the specific verb 'Check' with the resource 'site-level availability' and scope 'specific park and date range'. It distinguishes itself from search_parks by requiring a specific park and adds value by mentioning 'spot types' and 'capacity context'. However, it doesn't explicitly differentiate from its sibling find_campsites.

    Usage Guidelines 3/5

    Implies usage by stating 'specific park' (suggesting prior park identification is needed, versus search_parks), but provides no explicit guidance on when to use it versus find_campsites or find_open_weekends. No mention of prerequisites or alternatives.

  • find_ev_charging

    Behavior 3/5

    Annotations confirm the readOnly/destructive hints, so the description's burden is lower. It adds a valuable behavioral scope distinction ('on-site' vs 'nearby public') not present in the annotations. However, it omits details about the return format, pagination, and whether availability is real-time or static.

    Conciseness 5/5

    A single 18-word sentence with zero waste, front-loaded with the action and resource, with filter capabilities appended efficiently.

    Completeness 3/5

    Adequate for a search tool with 8 parameters, but gaps remain given the oneOf constraint complexity and the unmentioned temporal filtering parameters (start_date/end_date). No output schema exists, but the description does not need to explain return values.

    Parameters 3/5

    Schema coverage is low (38%). The description compensates partially by naming 'connector, power, and network filters', implicitly documenting 3 parameters. However, it fails to explain the critical oneOf requirement (park_id or park_name required), the purpose of the date parameters (likely availability windows), or radius_miles (search radius).

    Purpose 4/5

    A clear verb ('Return') and resource ('charging options/stations') with the specific scope 'for a park'. The EV charging domain distinguishes it from siblings (find_campsites, get_weather, etc.), though it could explicitly contrast with the general 'search_parks' tool.

    Usage Guidelines 3/5

    Implies usage context ('for a park') but lacks explicit when-to-use guidance or alternatives. It does not clarify when to use park_id vs park_name or how this tool relates to check_availability for the same park.

  • find_open_weekends

    Behavior 3/5

    Annotations declare readOnlyHint=true, so the description carries a reduced burden for safety disclosure. It adds valuable behavioral context: 'upcoming' implies future-date filtering, 'viable' suggests availability validation logic, and 'for a group' implies aggregation or multi-site coordination. However, it omits what the return value contains (dates? site lists?), pagination limits (num_weekends max 12), and cache/staleness behavior.

    Conciseness 5/5

    The single-sentence structure is exemplary: a front-loaded action ('Find upcoming weekend date options...'), the core value proposition ('viable campsite availability for a group'), and trailing options ('optionally filtered by...'). Zero redundancy, every word earns its place, and the length is appropriate for the complexity.

    Completeness 3/5

    With 8 parameters, only 1 required, no output schema, and low schema coverage, the tool needs richer documentation. The description omits: (1) what the tool returns structurally, (2) that group_size is the sole required field, and (3) the relationship between the location parameters (park_id vs park_name vs state vs near). Annotations cover the read-only safety profile, but the functional contract remains under-described.

    Parameters 3/5

    Schema coverage is low (38%), so the description must compensate. It maps conceptual filters (park, state, month, drive time) to parameters, implicitly covering park_id/park_name, state, month, and max_drive_minutes/near. However, it fails to mention num_weekends (return quantity), doesn't clarify the difference between park_id and park_name, and doesn't explain group_size semantics beyond the implied 'group' reference. Partial compensation achieved.

    Purpose 4/5

    The description clearly states the action (Find), resource (weekend date options with campsite availability), and scope (for a group). It effectively distinguishes this from sibling tools like 'find_campsites' by specifying 'upcoming weekend date options' and 'viable campsite availability', implying a search for time slots rather than just locations. However, it doesn't explicitly name siblings or contrast use cases.

    Usage Guidelines 3/5

    The phrase 'for a group' implies the target use case (group camping coordination), and 'optionally filtered by' suggests flexible query patterns. However, it lacks explicit guidance on when to use this versus 'find_campsites' or 'check_availability', or prerequisites like needing to specify at least one location constraint beyond group_size.

  • Park safety context tool

    Behavior 4/5

    Annotations confirm read-only, non-destructive behavior. The description adds valuable behavioral context beyond the annotations by detailing the specific safety data categories returned (alerts, hunting overlap, emergency support), giving the agent clear expectations about the content and utility of the response.

    Conciseness 5/5

    A single sentence front-loaded with the action verb 'Return'. Every clause earns its place: 'park safety context' establishes the domain, the three enumerated items specify the content, and 'trip window' signals the temporal parameters. No redundancy or boilerplate.

    Completeness 3/5

    With no output schema, the description effectively compensates by detailing the specific safety information returned. However, it fails to explain the critical input constraint that park_id and park_name are mutually exclusive (oneOf), which is necessary for correct tool invocation.

    Parameters 3/5

    Schema coverage is 50% (start_date and end_date documented; park_id and park_name lack descriptions). The description references 'trip window', which maps to the date parameters, but provides no guidance on the mutually exclusive park_id versus park_name parameters or the oneOf constraint required for successful invocation.

    Purpose 4/5

    The description clearly states that the tool returns safety-specific information (alerts, hunting overlap, emergency support) for a trip window, using a specific verb+resource pattern. While it implicitly distinguishes itself from siblings like get_weather or get_park_details through its specific content focus, it does not explicitly contrast with them.

    Usage Guidelines 3/5

    The description implies usage through the phrase 'for a given trip window', suggesting temporal planning use cases. However, it lacks explicit guidance on when to use this versus get_park_details or check_availability, and does not mention prerequisites like requiring valid park identification.

  • find_campsites

    Behavior 4/5

    Annotations indicate read-only, non-destructive behavior; the description adds valuable context about the ranking criteria (group fit, amenities, drive time, price) and scope limitation ('across supported parks'), though it doesn't mention pagination behavior or empty-result handling.

    Conciseness 5/5

    The single sentence efficiently packs the verb, resource, key parameters (date range), ranking dimensions, and scope without filler, placing the most critical information (finding ranked campsites) up front.

    Completeness 3/5

    While adequate for a search tool, the description lacks output format details and doesn't fully address the complexity of 14 parameters with low schema coverage; it mentions the ranking criteria but doesn't explain how results are structured or truncated.

    Parameters 3/5

    With only 36% schema description coverage, the description partially compensates by semantically mapping 'date range', 'group fit', 'amenities', 'drive time', and 'price' to the corresponding parameters, but leaves many boolean filters (ada_accessible, campfire, electric, pets_allowed) and the pagination parameters (limit, offset) undocumented.

    Purpose 5/5

    The description clearly states that the tool finds campsite options with specific ranking criteria (group fit, amenities, drive time, price), distinguishing it from sibling search tools like search_parks or find_ev_charging through its specific resource focus and ranking behavior.

    Usage Guidelines 2/5

    The description provides no guidance on when to use this tool versus alternatives like check_availability (for specific site availability) or search_parks (for general park information), nor does it mention prerequisites or constraints beyond the implied date range.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[tentahead-mcp MCP server card badge]

Copy to your README.md:

Score Badge

[tentahead-mcp MCP server score badge]

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
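As a worked example, the stated weights can be combined with the per-tool dimension scores from this report. How Glama rounds or normalizes intermediate values is an assumption; this sketch simply applies the weighted averages described above:

```python
# Dimension weights for Tool Definition Quality, as stated above.
WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

# Per-tool 1-5 scores from this report, in
# (purpose, usage, behavior, parameters, conciseness, completeness) order.
TOOL_SCORES = [
    (4, 2, 3, 2, 4, 3),  # park details
    (4, 2, 2, 4, 4, 3),  # park search
    (4, 3, 3, 2, 4, 3),  # similarity matching
    (4, 2, 4, 2, 4, 3),  # weather
    (4, 3, 3, 3, 5, 3),  # availability
    (4, 3, 3, 3, 5, 3),  # EV charging
    (4, 3, 3, 3, 5, 3),  # open weekends
    (4, 3, 4, 3, 5, 3),  # safety context
    (5, 2, 4, 3, 5, 3),  # campsite finding
]

def tdqs(scores):
    """Weighted Tool Definition Quality Score for one tool."""
    return sum(w * s for w, s in zip(WEIGHTS.values(), scores))

per_tool = [tdqs(s) for s in TOOL_SCORES]
mean_tdqs = sum(per_tool) / len(per_tool)

# Server-level definition quality: 60% mean TDQS + 40% minimum TDQS.
definition_quality = 0.6 * mean_tdqs + 0.4 * min(per_tool)

# Server coherence: four equally weighted dimensions, all 5/5 in this report.
coherence = (5 + 5 + 5 + 5) / 4

# Overall score: 70% definition quality + 30% coherence.
overall = 0.7 * definition_quality + 0.3 * coherence

print(round(mean_tdqs, 1))  # 3.4, matching the average reported above
```

Note how the 40% weight on the minimum TDQS pulls the definition-quality component (about 3.2 here) below the plain mean (about 3.4): one weak tool description drags the whole server down.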


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/enharper/tentahead-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.