
SnapAPI MCP Server

Server Quality Checklist

Profile completion: 75%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 3/5

    Three tools—extract, scrape, and analyze—overlap significantly as they all fetch content from URLs. While descriptions differentiate them (Readability vs. browser rendering vs. AI analysis), an agent may struggle to select correctly without knowing whether a page relies on JavaScript or requires AI summarization upfront.

    Naming Consistency: 3/5

    Mixed conventions exist: simple verbs (analyze, extract, ping, scrape), verb_noun snake_case (get_usage, list_devices), and nouns acting as commands (pdf, video, screenshot). The lack of a unified pattern (e.g., generate_pdf vs. pdf, record_video vs. video) creates minor inconsistency.

    Tool Count: 5/5

    Nine tools is well-scoped for a web capture and extraction service. Each tool serves a distinct function (content extraction, visual capture, account utilities, device management) without bloat or redundancy.

    Completeness: 4/5

    Covers the core lifecycle for web capture: content extraction (multiple methods), screenshot, PDF generation, video recording, and account monitoring. Minor gaps may exist for retrieving historical captures or checking async job status, but the essential surface is complete.

  • Average 3.9/5 across 8 of 9 tools scored. Lowest: 3.3/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v3.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 9 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • pdf

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so the description carries the full burden. It lists capabilities (page sizes, margins, scaling) that hint at the rendering engine's complexity, but omits critical behavioral details: viewport emulation, dynamic content handling (delay/waitUntil parameters), error cases for invalid URLs, and how binary output is returned.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two compact sentences with zero waste. Front-loaded with the core action ('Generate a PDF'), followed by a feature summary. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Covers primary PDF layout concerns but ignores the web-rendering nuances critical to this tool's function: viewport dimensions, network idle waiting, and delay mechanisms are absent despite being significant parameters for successful conversion.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing baseline 3. The description maps features to the pdfOptions object ('page sizes', 'margins', 'landscape') and input modes, but does not add syntax clarification beyond the schema's existing documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb ('Generate') and resource ('PDF') with explicit input sources ('URL or HTML content'). However, it does not differentiate from sibling 'screenshot' or 'scrape' tools, leaving ambiguity about when to prefer PDF generation over image capture.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Lacks explicit guidance on when to use this tool versus siblings like 'screenshot' or 'scrape'. The schema describes the url/html mutual exclusivity ('At least one...must be provided'), but the description gives no contextual 'when-to-use' advice.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • scrape

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, the description carries full disclosure burden. It successfully notes the browser-based execution and JavaScript support, but omits critical behavioral details: it doesn't explain the multi-page crawling capability (1-10 pages), anti-bot implications of premiumProxy, or performance characteristics like typical latency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient two-sentence structure. First sentence delivers the complete core value proposition (action + mechanism + outputs); second sentence adds the JavaScript capability differentiator. Zero redundancy, front-loaded with essential information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (browser automation, pagination, proxy support) and lack of output schema, the description is minimally adequate. It covers the basic scraping contract but fails to contextualize advanced features like multi-page crawling or bot detection bypass, which are significant capabilities implied by the parameter schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description mentions output formats (aligning with the 'type' enum) but adds no semantic context for complex parameters like 'pages' (pagination behavior), 'blockResources' (performance impact), or 'premiumProxy' (use case) beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the core action (scrape URL), mechanism (real browser), and output formats (Markdown text, raw HTML, links). It implies distinction from simple HTTP fetch tools via 'real browser' and 'JavaScript-rendered pages,' though it doesn't explicitly differentiate from siblings like 'extract' or 'screenshot.'

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implicit guidance by noting it 'Works on JavaScript-rendered pages,' signaling use for dynamic content. However, it lacks explicit when-to-use guidance regarding the pagination feature ('pages' parameter) or when to prefer 'text' vs 'html' vs 'links' outputs, and doesn't mention alternatives.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • screenshot

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and discloses specific behavioral capabilities (ad/cookie-banner blocking, CSS/JS injection, device emulation, dark mode). However, it omits operational context like error handling, timeout behavior, or whether the tool makes external network requests.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences efficiently structured: the first establishes core function and I/O, the second enumerates key capabilities. Every phrase earns its place; 'and more' is acceptable given the 18-parameter complexity without being overly verbose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 18 parameters, no annotations, and no output schema, the description adequately covers the essentials (input sources, output type, key features) but lacks detail on error scenarios, return value structure, or pagination. The 100% schema coverage compensates for parameter details, but behavioral gaps remain.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description maps capabilities to parameters (e.g., 'full-page capture' to fullPage, 'device emulation' to device) but does not add semantic context beyond what the schema already provides regarding formats, ranges, or validation rules.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states the core action ('Take a screenshot') and resources ('URL', 'HTML/Markdown') with clear output ('return the image'). It effectively distinguishes from siblings like 'pdf', 'video', and 'scrape' by specifying image generation rather than document extraction or video production.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description clarifies input alternatives (URL vs HTML vs Markdown) implying when to use each, but lacks explicit guidance on choosing this tool over siblings like 'scrape' (data extraction) or 'pdf' (document generation). No 'when-not-to-use' or prerequisite guidance is provided.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_usage

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It discloses return content ('requests used vs. limit, remaining quota, and monthly statistics') but omits auth requirements, rate limits, caching behavior, or whether this counts against quota itself.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: first establishes action and scope, second details specific return values. Front-loaded with the verb and appropriately sized.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a 0-parameter utility. Describes return values effectively (compensating for missing output schema), but lacks mention of authentication prerequisites or billing period calculation boundaries.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters present; baseline 4 per scoring rules. Schema is empty object with no properties requiring semantic elaboration.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Check' + resource 'SnapAPI account usage, quota, and plan details' + scope 'current billing period'. Clearly distinguishes from siblings (analyze, extract, screenshot, etc.) which are content/media processing tools rather than account metadata.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit when-to-use guidance, prerequisites, or alternatives mentioned. While the purpose is distinct from siblings, the description provides no contextual guidance on when to query usage versus performing other operations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_devices

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, description carries full burden. It discloses what each preset contains (viewport, device scale factor, mobile flag) and categorizes the devices (phones, tablets, desktops), but omits safety traits, rate limits, or caching behavior typical of read operations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste: purpose statement, content explanation, and usage instruction. Information density is high with no redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a zero-parameter discovery tool without output schema, the description adequately explains the return concept (presets with IDs) and their relationship to consumer tools (screenshot, video). Covers necessary context for agent selection.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters present; baseline score applies per rubric. The description appropriately requires no additional parameter explanation given the empty input schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb 'List' with specific resource 'device presets'. The scope is well-defined ('for screenshot and video emulation') and distinguishes from siblings like analyze or pdf by explicitly mentioning the screenshot/video domain.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states the output intent: 'Pass the device id to the screenshot or video tool.' This establishes the workflow (use before screenshot/video) and links to specific sibling tools, though it lacks explicit 'when not to use' guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • analyze

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It discloses critical AI-model usage ('analyze it with an AI model', 'AI-generated insights') which explains the non-deterministic nature, but omits other behavioral traits like idempotency, latency implications, or cost/rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tightly constructed sentences with zero waste. Front-loaded with the core action (extract+analyze), followed by return values, and ending with sibling differentiation. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no output schema, the description adequately covers return values ('AI-generated insights, summaries...'). Addresses the tool's complexity (AI processing) but could strengthen with mention of error conditions or latency expectations typical of LLM calls.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage, establishing baseline 3. The description adds high-level context that the prompt drives 'custom analysis', but does not augment specific parameter semantics (formats, enum usage patterns) beyond what the schema already documents.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Extract', 'analyze') with clear resource ('URL') and distinguishes from siblings like 'extract' and 'scrape' by explicitly stating 'Combines web extraction with LLM analysis in one call', clarifying the AI-powered differentiation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies when to use versus alternatives by emphasizing the combined extraction+analysis workflow ('in one call'), but lacks explicit guidance on when NOT to use it (e.g., 'use extract for raw content without AI processing').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • video

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. Adds valuable context that JavaScript runs 'before and during' recording and that return is a URL. Missing: URL expiration/temporariness, headless vs visible browser, error handling for invalid JavaScript, or resource limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, zero fluff. Front-loaded with core action (Record) and format (WebM). Second sentence covers optional complexity (JS scenario) and return value. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for an 11-parameter video tool with 100% schema coverage. Description compensates for missing output schema by stating return format (URL). Could strengthen with URL persistence details or error behavior, but functional coverage is complete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. The description adds significant value for the complex 'scenario' parameter: it elaborates timing ('before and during' vs. the schema's 'during') and expands the examples ('form fills' is not in the schema). This justifies elevation above baseline.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: 'Record a browser session as a video (WebM)' provides exact verb, resource, and format. Clearly distinguishes from sibling 'screenshot' (static image) and 'pdf' (document capture) through the motion/recording aspect.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies use case through 'Optionally runs a JavaScript scenario' (dynamic interactions), but lacks explicit when-to-use vs alternatives (e.g., 'use screenshot for static pages, video for motion') or prerequisites. No mention of when NOT to use.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • extract

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable implementation context by noting the use of 'Mozilla Readability' and 'HTML noise' removal, but lacks disclosure on error handling, rate limiting, authentication requirements, or timeout behavior expected from a web extraction service.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with zero redundancy. It front-loads the core action ('Extract clean, structured content') and immediately follows with outputs and optimization purpose. Every word contributes to understanding.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 10 parameters and no output schema, the description adequately covers return value types (Markdown, article data, metadata, etc.) and the custom structured fields capability. It appropriately delegates parameter details to the comprehensive schema while providing high-level use case context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the input schema has 100% description coverage, the description adds crucial semantic context by framing the tool for 'LLM feeding,' which helps agents understand the intent behind parameters like cleanOutput, blockAds, maxLength, and the custom fields object. It explains the 'why' behind the extraction modes.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool extracts 'clean, structured content from a URL' and specifically lists output formats (Markdown, article data via Mozilla Readability, OG metadata, etc.). The phrase 'without HTML noise' effectively distinguishes it from the sibling 'scrape' tool, while 'feeding web content to LLMs' differentiates it from 'screenshot'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear usage context by specifying it is 'Optimized for feeding web content to LLMs,' indicating when to use this tool. However, it does not explicitly state when NOT to use it or name specific sibling alternatives (e.g., 'use screenshot for visual captures').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • ping

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. Successfully mentions return values ('Returns the API status and current server time') and validation purpose. Could enhance by explicitly stating idempotency or safety for repeated calls, but covers core behavioral traits well for a simple health check.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tightly constructed sentences: purpose, return values, and usage guidance. Front-loaded with action verb 'Check'. Zero redundancy or waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Complete for a zero-parameter health check tool. Addresses purpose, returns, and usage context without requiring output schema elaboration. Sufficient complexity coverage given simple schema and lack of annotations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters present, establishing baseline of 4. Description appropriately makes no parameter claims since schema confirms no inputs required.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Check' with clear resources (SnapAPI service, API key) and scope (reachability, validity). Distinguishes from operational siblings (analyze, extract, etc.) by positioning as a configuration verification tool rather than a data processing operation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicit when-to-use guidance: 'Use this to verify your configuration before making other calls.' Provides clear temporal context for invocation. Does not explicitly name alternative tools or 'when-not-to-use' exclusions, hence not a 5.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.


How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.


How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
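The formula above can be sketched in Python. The weights and tier thresholds come from the text; the aggregation details not spelled out there (e.g. taking the plain mean of the four equally weighted coherence dimensions) and all function names are illustrative assumptions, not Glama's actual implementation.

```python
# Dimension weights for the per-tool TDQS, as listed above.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    # Weighted average of the six 1-5 dimension scores.
    return sum(scores[dim] * w for dim, w in DIMENSION_WEIGHTS.items())

def server_definition_quality(tdqs_scores: list) -> float:
    # 60% mean + 40% minimum: one poorly described tool drags the score down.
    return 0.6 * (sum(tdqs_scores) / len(tdqs_scores)) + 0.4 * min(tdqs_scores)

def overall_score(definition_quality: float, coherence: float) -> float:
    # 70% Tool Definition Quality + 30% Server Coherence.
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    for threshold, letter in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= threshold:
            return letter
    return "F"

# Example with this server's reported numbers: mean TDQS 3.9, minimum 3.3,
# coherence dimensions 3, 3, 5, 4 (assumed to be averaged).
defq = 0.6 * 3.9 + 0.4 * 3.3        # 3.66
coherence = (3 + 3 + 5 + 4) / 4     # 3.75
print(tier(overall_score(defq, coherence)))  # → A
```

Note how the 40% weight on the minimum makes the aggregate sensitive to the single weakest tool: raising a 2/5 description to 4/5 moves the server score far more than polishing an already strong one.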


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Sleywill/snapapi-mcp'
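The same call can be made from Python's standard library. The endpoint path mirrors the curl example above; the helper names are illustrative, and the structure of the returned JSON is not documented here, so treat any fields you read as assumptions.

```python
import json
import urllib.request

def server_url(owner: str, repo: str) -> str:
    # Directory API endpoint, following the curl example's path scheme.
    return f"https://glama.ai/api/mcp/v1/servers/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    # Fetch and parse the server's metadata as JSON.
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# e.g. fetch_server("Sleywill", "snapapi-mcp") returns the server record
# as a dict; inspect its keys rather than assuming a particular shape.
```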

If you have feedback or need assistance with the MCP directory API, please join our Discord server.