
Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.

  • Disambiguation 5/5

    Each tool has a distinct, non-overlapping purpose. The four discovery 'list_' tools target different resource types (game videos, meme hooks, music, templates), while the lifecycle tools (create_short, get_status, get_shorts, list_requests, delete_request) clearly separate creation, monitoring, retrieval, listing, and cleanup phases without ambiguity.

    Naming Consistency 5/5

    Perfect consistency with snake_case verb_noun pattern throughout (create_short, delete_request, get_shorts, get_status, list_requests, etc.). All list operations use the 'list_' prefix, retrieval uses 'get_', and mutation uses 'create_' or 'delete_' predictably.

    Tool Count 5/5

    Nine tools is an ideal scope for this domain—covering the full video generation lifecycle (create, status check, retrieval, deletion, listing) plus four discovery tools for auxiliary resources (games, memes, music, templates). No bloat, no gaps in the surface area.

    Completeness 4/5

    Covers the essential CRUD operations and resource discovery well. Minor gap: lacks a 'get_request' singular operation to fetch metadata for a specific request without listing all; agents must use list_requests with filtering or combine get_status/get_shorts to reconstruct request state. No cancel/pause functionality mentioned.

  • Average 4/5 across 9 of 9 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.4

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 9 tools.
  • No known security issues or vulnerabilities reported.


  • Are you the author?

  • Add related servers to improve discoverability.

Tool Scores

  • list_requests

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It successfully discloses pagination and sorting capabilities, but lacks explicit safety disclosure (read-only nature), error behavior, or what the return payload contains. 'List' implies read-only but doesn't confirm no side effects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single, efficient sentence with verb-fronted structure ('List all...'). Every clause serves a purpose. Minor quibble: 'all' combined with pagination parameters could confuse agents, but the description is generally very tight.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a 5-parameter list operation with good schema coverage, but gaps remain due to missing annotations and output schema. No description of return structure, rate limits, or empty result behavior. Minimum viable for this complexity level.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, so baseline applies. The description adds value by functionally grouping parameters into 'status filtering,' 'pagination,' and 'sorting,' which helps the agent understand the parameter purposes beyond the individual schema descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: 'List' (verb) + 'short creation requests' (resource) clearly distinguishes from siblings like 'get_shorts' (returns completed shorts) and 'list_game_videos'/'list_music' (different resource types). The scope is unambiguous.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit when-to-use or when-not-to-use guidance. While the resource type 'short creation requests' implies this is for job monitoring/tracking versus creating content (create_short) or retrieving finished products (get_shorts), it doesn't state this explicitly or mention prerequisites like auth requirements.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • create_short

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, but description discloses critical behavioral traits: async processing (returns ID instantly, 5-30min processing time) and cost model (1 credit). Could mention idempotency or failure modes.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences with zero waste: purpose, return value, processing time, and cost. Front-loaded with the core action. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a complex 20-parameter tool with 100% schema coverage, the description appropriately focuses on the async return pattern and cost rather than repeating parameter docs. The missing mention of the status polling requirement (get_status) prevents a 5.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage, establishing baseline 3. Description mentions 'YouTube video or uploaded file' which loosely maps to url/fileUrl parameters, but adds minimal semantic value beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Create' + resource 'AI-generated short-form video clips' + source types 'YouTube video or uploaded file'. Clearly distinguishes from sibling list/get/delete tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies async workflow via 'Returns a request ID instantly' and processing time, but fails to mention prerequisite sibling tools (list_templates, list_meme_hooks, etc.) required for parameters like templateId and memeHookName.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_shorts

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses specific return fields (video URLs, AI-generated titles, viral scores, durations) and critical behavioral trait: results are 'sorted by viral score'. Missing error behavior for incomplete requests or invalid IDs.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences with zero waste. Front-loaded with action ('Retrieve'), followed by scope ('for a completed request'), then return value specification. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists, but description compensates by detailing the complete response structure (URLs, titles, scores, durations) and sorting behavior. Lacks only error-state documentation for a fully complete picture.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage (requestId documented as 24-char hex), establishing baseline 3. Description references 'completed request' implying the parameter context but does not add syntax, format, or usage details beyond what the schema provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Retrieve' + resource 'short clips' + scope 'for a completed request'. Clearly distinguishes from sibling create_short (creates vs retrieves) and get_status (gets content vs status).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage timing with 'completed request', suggesting it should not be used on pending requests. However, lacks explicit guidance to use sibling get_status first to check completion, or explicit when-not-to-use warnings.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_game_videos

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It adds valuable behavioral context about the split-screen layout (content top, game bottom) and hints at output structure (gameVideoName field), but omits pagination behavior details, rate limits, or response format specifics expected for a read operation with no safety annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First sentence front-loads purpose and specific domain context; second sentence provides actionable usage guidance. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple 2-parameter list tool with no output schema, the description adequately compensates by revealing the key output field (gameVideoName) and the specific domain (split-screen overlays). Slight gap on pagination return structure (total counts, hasMore flags), but sufficient for tool selection.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% (page number and items per page are clearly documented), so the baseline expectation is met. The description does not add parameter-specific semantics, but none are needed given the comprehensive schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'List' with clear resource 'gameplay videos' and distinguishes from siblings by specifying the unique use case 'for split-screen overlays (content top, game bottom)', clearly differentiating it from list_music, list_templates, and other list tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit guidance on when/how to use the output ('Use the exact gameVideoName when creating shorts'), referencing the sibling tool create_short. However, it lacks explicit 'when not to use' guidance or contrast with alternative tools like list_templates.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_meme_hooks

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. It adds domain context explaining what meme hooks are (duration, purpose), but omits safety profile (read-only vs mutation), rate limits, or pagination behavior details that annotations would normally cover.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: first defines resource and its characteristics, second provides usage workflow guidance. Every word earns its place; appropriately front-loaded.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a listing tool with no output schema, description compensates by identifying the key output field to capture (memeHookName) for use with create_short. Covers pagination via schema. Good completeness for a simple list operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage with clear pagination parameter descriptions ('Page number', 'Items per page'). Description adds no parameter-specific guidance, which is acceptable given schema completeness, meeting baseline expectation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description provides specific verb 'List' with resource 'meme hook clips', defines what they are (2-5 second attention grabbers prepended to shorts), and distinguishes from siblings like list_music or list_templates by specifying the unique clip type and its relationship to video creation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly connects to sibling workflow by stating to 'Use the exact memeHookName when creating shorts', establishing when/why to use this tool (to get valid names for create_short). Lacks explicit 'when not to use' or alternative guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_music

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It discloses output structure ('names and durations') which compensates for the missing output schema. However, it omits safety/disposition details (e.g., read-only nature) and pagination behavior despite having pagination parameters.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, both earning their place: the first declares purpose and output, the second provides critical usage context for the downstream workflow. No redundancy or filler content.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a low-complexity tool (2 optional params, no nested objects) without an output schema, the description appropriately compensates by specifying the return values ('names and durations'). It also connects the output to the 'create_short' workflow. Could mention pagination implications, but schema coverage mitigates this gap.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% (page and limit are fully documented), establishing a baseline of 3. The description does not add semantic meaning beyond the schema (e.g., it doesn't explain pagination strategy or defaults), so it meets but does not exceed the baseline.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('List') + resource ('background music tracks') + return details ('names and durations'). It clearly distinguishes from sibling list tools (list_game_videos, list_meme_hooks, list_templates) by specifying 'background music'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The second sentence provides explicit usage guidance: 'Use the exact musicName when creating shorts.' This establishes the relationship to sibling tool 'create_short' and tells the agent when to invoke this tool (to get valid music names for short creation). Lacks explicit 'when not to use' guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • delete_request

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It effectively discloses destructive scope (request + videos), financial impact (credits not refunded), and permanence (irreversible), covering critical safety traits for a mutation tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences, all high-value: action definition, financial warning, and permanence warning. Front-loaded with the operative verb and no redundant text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter destructive tool without output schema, the description adequately covers operational scope and consequences. Minor gap in not describing success/failure return behavior, though this is less critical given the safety warnings provided.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage (requestId fully documented), establishing baseline 3. The description does not add parameter-specific semantics beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb ('Permanently delete') and resources ('short creation request and all generated videos'), clearly distinguishing it from sibling creation and listing tools like create_short or list_requests.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear safety context through explicit warnings ('Credits are NOT refunded', 'irreversible') that guide against frivolous use, though it does not explicitly name alternative tools for comparison.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_templates

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It discloses what data is returned ('IDs and names'), helping predict output structure. However, missing safety/permission info, pagination behavior, or rate limits that would be essential for a production tool with no annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First sentence establishes purpose and return value; second provides usage context. Front-loaded structure with every sentence earning its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given zero parameters and no output schema, description adequately compensates by specifying return fields ('IDs and names'). Sufficient for low-complexity tool, though could mention pagination or total volume limits for complete production context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema contains zero parameters. Per rubric guidelines, zero-parameter tools receive baseline score of 4. Description appropriately does not invent parameter semantics where none exist.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description provides specific verb ('List'), resource ('caption style templates'), scope ('all available'), and return fields ('IDs and names'). Clearly distinguishes from sibling list tools (list_music, list_game_videos) by specifying 'caption style' resource.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Second sentence establishes workflow connection to sibling tool create_short ('Use the templateId when creating shorts'), implying when to invoke this tool. Lacks explicit 'when not to use' or comparison to alternative listing tools, but provides clear contextual purpose.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_status

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses return structure comprehensively: enumerates status values (queued/processing/completed/failed), mentions progress percentage and current step. Missing error behavior (e.g., invalid requestId) and read-only nature, but effectively compensates for absent output schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First sentence defines action and scope; second sentence discloses return values. Front-loaded with essential information, no redundant phrases.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for tool complexity (single param, 100% schema coverage, simple behavior). Description adequately compensates for missing output schema by detailing return structure. Links to sibling create_short via 'short creation request' context, completing the operational picture.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with requestId fully documented (type, pattern, description linking to create_short). Description does not mention parameters, but baseline of 3 is appropriate when schema documentation is complete; no additional parameter semantics needed in description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb 'Check' with explicit resource 'processing status and progress of a short creation request'. Distinguishes from sibling get_shorts (which returns content) and list_requests (which lists multiple) by focusing on detailed progress polling of a single creation job.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage context by referencing 'short creation request' and status lifecycle (queued/processing/completed/failed), suggesting polling workflow after create_short. Lacks explicit 'when to use vs alternatives' statement (e.g., contrasting with list_requests), but context is clear from description and parameter schema reference to create_short.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
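Taken together, the tool descriptions imply a create, poll, fetch lifecycle: create_short returns a request ID instantly, get_status is polled until the request completes, and get_shorts retrieves the finished clips. Below is a minimal sketch of that agent workflow. The `call_tool` method and all response field names are assumptions standing in for a real MCP client; the stub simulates a request that completes on the third status poll.

```python
import time


class StubClient:
    """Hypothetical MCP client simulating the server's async lifecycle."""

    def __init__(self):
        self._polls = 0

    def call_tool(self, name, args=None):
        # Field names here are assumptions, not the server's actual schema.
        if name == "create_short":
            return {"requestId": "0123456789abcdef01234567"}  # 24-char hex ID
        if name == "get_status":
            self._polls += 1
            status = "completed" if self._polls >= 3 else "processing"
            return {"status": status, "progress": min(self._polls * 40, 100)}
        if name == "get_shorts":
            return {"shorts": [{"url": "https://example.com/clip1.mp4",
                                "viralScore": 87}]}
        raise ValueError(f"unknown tool: {name}")


def generate_short(client, source_url, poll_interval=0.0):
    """create_short -> poll get_status -> get_shorts, per the descriptions."""
    created = client.call_tool("create_short", {"url": source_url})
    request_id = created["requestId"]
    while True:
        status = client.call_tool("get_status", {"requestId": request_id})
        if status["status"] in ("completed", "failed"):
            break
        time.sleep(poll_interval)  # real runs may take 5-30 minutes overall

    if status["status"] == "failed":
        raise RuntimeError("short creation failed")
    return client.call_tool("get_shorts", {"requestId": request_id})["shorts"]


shorts = generate_short(StubClient(), "https://www.youtube.com/watch?v=example")
```

In a real integration the poll interval would be measured in minutes given the stated 5-30 minute processing window, and a failed status should be surfaced rather than retried blindly, since each create_short call costs a credit.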

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Card badge preview: ssemble-mcp-server MCP server]

Copy to your README.md:

Score Badge

[Score badge preview: ssemble-mcp-server MCP server]

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%); the weighted result is the tool's Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
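The formulas above can be sketched in a few lines. The dimension weights, aggregation split, and tier cutoffs come from this page; the assumption that scores are simple unrounded means is mine, and the example scores are the list_requests breakdown from this report.

```python
# Per-tool dimension weights, as stated in the scoring description.
DIMENSION_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}


def tool_tdqs(scores):
    """Weighted 1-5 Tool Definition Quality Score for one tool."""
    return sum(DIMENSION_WEIGHTS[dim] * s for dim, s in scores.items())


def server_quality(per_tool_scores, coherence_scores):
    """Overall score: 70% definition quality + 30% server coherence."""
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    coherence = sum(coherence_scores) / len(coherence_scores)  # four equal dims
    return 0.7 * definition_quality + 0.3 * coherence


def tier(overall):
    """Letter tier from the overall score; B and above passes."""
    for grade, cutoff in [("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)]:
        if overall >= cutoff:
            return grade
    return "F"


# list_requests from this report: purpose 5, usage 3, behavior 3,
# parameters 3, conciseness 4, completeness 3 -> TDQS of 3.6.
example = {"purpose": 5, "usage": 3, "behavior": 3,
           "parameters": 3, "conciseness": 4, "completeness": 3}
```

With these weights a tool scoring straight 3s except for a 5 on Purpose and a 4 on Conciseness lands at 3.6, which is why Purpose Clarity (the heaviest dimension) dominates the per-tool score.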


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ssembleinc/ssemble-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.