Glama

AI Site Scorer

Server Details

Check website AI-readiness: Schema.org, llms.txt, E-E-A-T, robots.txt. Works in Cursor & Claude.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 3.8/5 across all 5 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct, non-overlapping purpose: analyze_url initiates analysis, enhance_report adds deeper insights, generate_fix_prompt creates developer instructions, get_report retrieves results, and list_recent_analyses shows history. The descriptions clearly differentiate their functions, eliminating ambiguity and misselection risks.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., analyze_url, enhance_report, generate_fix_prompt), using snake_case throughout. This predictable naming scheme makes the tool set easy to navigate and understand at a glance.

Tool Count: 5/5

With 5 tools, the server is well-scoped for its AI site scoring domain, covering the full lifecycle from analysis initiation to result retrieval and enhancement. Each tool earns its place without bloat, providing a focused and manageable interface.

Completeness: 5/5

The tool set offers complete coverage for the domain: analyze_url for initial analysis, get_report and list_recent_analyses for retrieval, enhance_report for deeper insights, and generate_fix_prompt for actionable fixes. There are no obvious gaps, enabling agents to handle end-to-end workflows seamlessly.
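The end-to-end workflow described above can be sketched as a short client-side chain. This is a hypothetical illustration: `call_tool` stands in for an MCP client's tool-call method, and the result fields `job_id` and `prompt` are assumptions, not taken from a documented output schema.

```python
# Hypothetical sketch of the end-to-end lifecycle the tool set covers:
# analyze -> enhance -> generate a fix prompt.
# `call_tool(name, arguments)` is a stand-in for an MCP client's
# tool-call method; the result field names are assumptions.
def full_workflow(call_tool, url: str) -> str:
    report = call_tool("analyze_url", {"url": url, "wait_for_result": True})
    job_id = report["job_id"]
    call_tool("enhance_report", {"job_id": job_id})        # deeper LLM insights
    fix = call_tool("generate_fix_prompt", {"job_id": job_id})
    return fix["prompt"]                                   # IDE-ready fix prompt
```

Because every downstream tool keys off the same `job_id`, an agent only needs to carry one identifier through the whole chain.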

Available Tools

5 tools
analyze_url: A

Submit a URL for AI-readiness analysis. Checks how well the site is structured for AI/LLM crawling (robots.txt, structured data, page speed, content quality, etc.). Returns scores and recommendations. Set wait_for_result=true to wait for completion (up to 60s). Pricing: Free for registered users (100/mo). Anonymous (x402): $0.001 USDC for analysis only; $0.006 USDC combined when enhance=true ($0.001 analyze + $0.005 enhance, charged as a single upfront payment so only one X-PAYMENT header is needed per request).

Parameters (JSON Schema):
- url (required): The URL to analyze (must include http:// or https://).
- enhance (optional; default false): If true, automatically run LLM deep analysis after basic analysis completes. Anonymous users: combined $0.006 USDC ($0.001 analyze + $0.005 enhance) charged as a single upfront payment.
- wait_for_result (optional; default true): If true, wait for the analysis to complete and return the full report. If false, return the job_id immediately.
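The anonymous x402 pricing above can be sketched as a small helper. This is hypothetical illustration only: the function name and the micro-USDC integer units are sketch choices, not part of the server's API.

```python
# Hypothetical sketch of the anonymous (x402) pricing described above.
# Amounts are in micro-USDC integers to avoid floating-point rounding.
ANALYZE_PRICE = 1_000  # $0.001 USDC
ENHANCE_PRICE = 5_000  # $0.005 USDC

def quote_usdc(enhance: bool = False) -> int:
    """Return the single upfront charge for one analyze_url call.

    When enhance=True, the analyze and enhance fees are combined so
    only one X-PAYMENT header is needed per request.
    """
    return ANALYZE_PRICE + (ENHANCE_PRICE if enhance else 0)
```

For example, `quote_usdc(enhance=True)` returns 6_000 micro-USDC, matching the combined $0.006 charge in the description.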
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels at disclosing behavioral traits: it specifies timing constraints (up to 60s), detailed pricing tiers (free registered tier with 100/mo limit, anonymous x402 payments), payment mechanics (single upfront combined charge for enhance=true, X-PAYMENT header requirement), and the sequential workflow (basic analysis runs before LLM deep analysis when enhanced).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense and front-loaded with the core purpose, but the pricing sentence is overly long and complex, combining multiple pricing scenarios, currency amounts, and technical payment header details in a single block. While all information is relevant, better structural separation (e.g., breaking into timing, pricing, and behavior sentences) would improve readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description adequately explains the return values (scores/recommendations vs. job_id) and synchronous/asynchronous behaviors. However, it fails to mention the relationship to 'get_report' for retrieving results when 'wait_for_result=false', leaving a gap in the complete workflow documentation for a tool with async capabilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
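The async workflow the review finds under-documented (submit with wait_for_result=false, then retrieve via get_report) might look like the following sketch. `call_tool` is a stand-in for an MCP client's tool-call method, and the `status` field on the report is an assumption, not a documented output schema.

```python
# Hypothetical sketch of the async path: submit a job, then poll
# get_report until the analysis completes. `call_tool(name, arguments)`
# stands in for an MCP client's tool-call method; the "status" field
# is an assumed shape for the report payload.
import time

def analyze_async(call_tool, url: str, poll_interval: float = 2.0,
                  timeout: float = 120.0) -> dict:
    job = call_tool("analyze_url", {"url": url, "wait_for_result": False})
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        report = call_tool("get_report", {"job_id": job["job_id"]})
        if report.get("status") == "completed":
            return report
        time.sleep(poll_interval)
    raise TimeoutError(f"analysis {job['job_id']} did not complete in {timeout}s")
```

Polling client-side avoids holding the tool call open past the 60-second synchronous wait limit noted in the description.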

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds significant value by specifying the maximum wait duration ('up to 60s') for 'wait_for_result' and providing granular pricing breakdowns for the 'enhance' parameter (clarifying the $0.001 + $0.005 cost structure and single-charge mechanism) that are not present in the schema property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool submits a URL for 'AI-readiness analysis' and lists specific checked elements (robots.txt, structured data, etc.). It distinguishes the basic analysis from the enhanced analysis (via the enhance parameter), implicitly differentiating from the sibling 'enhance_report' tool. However, it does not explicitly clarify when to use this tool versus 'get_report' for retrieving existing analyses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit parameter guidance for 'wait_for_result' (explaining the 60s wait time) and details the cost implications of using 'enhance=true' versus false. However, it lacks explicit tool-selection guidance regarding when to use the asynchronous mode (and subsequently 'get_report') versus waiting for immediate results, or when to use 'enhance_report' as a standalone alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

enhance_report: A

Run LLM deep analysis on a completed report. Returns detailed AI analysis with priority issues and actionable recommendations. Pricing: Free with Pro subscription, or $0.005 USDC via x402 (anonymous or free-tier).

Parameters (JSON Schema):
- job_id (required): The UUID job_id of a completed report.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and succeeds in disclosing the pricing model ($0.005 USDC via x402, free-tier options) and return format ('detailed AI analysis with priority issues'). It does not clarify whether this creates a persistent analysis record or returns ephemeral data, and omits side-effect warnings.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences deliver maximum information density: purpose (sentence 1), return value (sentence 2), and cost structure (sentence 3). No filler text; front-loaded with the action verb and logically sequenced.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool without output schema or annotations, the description adequately covers the return value structure and critical pricing constraints. It could be improved by clarifying if the analysis result is persisted (linked to the job_id) or transient, but provides sufficient context for invocation decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'job_id' parameter, the schema already fully documents the input. The description references 'completed report' which aligns with the schema's 'UUID job_id of a completed report,' but adds no additional semantic detail about parameter format or validation beyond the schema itself.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb phrase ('Run LLM deep analysis') and clear resource ('completed report'), immediately distinguishing it from sibling tools like analyze_url (which likely analyzes raw URLs) and get_report (retrieval). The scope is precisely defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies prerequisites by requiring a 'completed report' (suggesting it follows report generation) and provides critical cost guidance ('Pricing: Free with Pro...'), which informs usage decisions. However, it lacks explicit comparison to siblings like analyze_url or guidance on when to prefer this over get_report.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_fix_prompt: A

Generate a Cursor/IDE-ready prompt to fix AI readiness issues found in a report. Returns a comprehensive, actionable developer prompt. Pricing: Free with Pro subscription, or $0.002 USDC via x402 (anonymous or free-tier).

Parameters (JSON Schema):
- job_id (required): The UUID job_id of a completed report.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the pricing model ($0.002 USDC via x402, or free with Pro) and the output format ('comprehensive, actionable developer prompt'). It omits a safety classification (likely read-only/generative) and idempotency details, but the pricing transparency is a significant behavioral disclosure that many tools omit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: the purpose is front-loaded, output behavior comes second, and pricing third. Each sentence delivers distinct, non-redundant information at an appropriate length for a single-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a low-complexity tool (one simple parameter, no output schema). It covers purpose, input requirement, cost, and return value. It could improve by explicitly stating that it requires the job_id of a successfully completed report, though this is implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter is fully documented in the schema ('The UUID job_id of a completed report'). The description reinforces context by mentioning 'found in a report' but adds no syntax, format details, or examples beyond what the schema already provides. The baseline of 3 is appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Generate' (verb) + 'Cursor/IDE-ready prompt' (resource/format) + 'fix AI readiness issues' (purpose). Clearly distinguishes from sibling tools like get_report or enhance_report by specifying it produces actionable developer prompts rather than retrieving or enhancing report data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies prerequisite context by referencing 'issues found in a report,' suggesting it requires a completed analysis job. However, lacks explicit guidance on when to choose this over enhance_report or analyze_url, and doesn't state prerequisites (e.g., 'use after get_report completes').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_report: A

Fetch a previously completed analysis report by job ID. Returns scores, check details, and recommendations.

Parameters (JSON Schema):
- job_id (required): The UUID job_id returned by analyze_url.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It compensates partially by disclosing return contents ('scores, check details, and recommendations') since no output schema exists. However, it omits behavioral traits like idempotency, safety/read-only nature, error cases, or cache behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first states purpose/action, second states return value. Front-loaded with the operative verb 'Fetch' and no filler words or redundant explanations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter retrieval tool, the description is nearly complete. It adequately describes return values to compensate for missing output schema. Minor gap: without annotations, it should ideally declare the read-only/safe nature of the operation, though 'Fetch' implies this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single parameter, the schema already fully documents job_id including its origin (analyze_url). The description adds no additional parameter semantics beyond what's in the schema, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb (Fetch) with clear resource (analysis report) and scope (by job ID). It distinguishes from siblings: unlike analyze_url (creates), enhance_report (modifies), or list_recent_analyses (lists), this retrieves a specific completed report by ID.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'previously completed' implies this follows an analysis job, and the schema description notes job_id comes from analyze_url. However, the main description lacks explicit when-to-use guidance (e.g., 'Use after analyze_url completes') or explicit differentiation from list_recent_analyses.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_recent_analyses: A

List the 10 most recent analyses for the authenticated account. Returns URLs, scores, and statuses.

Parameters (JSON Schema):
- limit (optional; default 10): Number of results to return (1-50).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses return values (URLs, scores, statuses) which compensates for the missing output schema, and mentions 'authenticated account' for auth context. However, it lacks explicit safety disclosure (read-only status), error behavior, or pagination details that annotations would typically provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first sentence establishes the operation and scope, second sentence discloses return values. Every word earns its place. The description is appropriately front-loaded with the core action and efficiently structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (1 optional parameter, no nested objects) and absence of output schema, the description adequately compensates by listing the three return value categories. It covers the essential contract (input scope, output shape) for a simple listing operation, though explicit read-only safety confirmation would strengthen it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the 'limit' parameter ('Number of results to return (1-50, default 10)'), the schema carries the semantic weight. The description references the default value ('10 most recent') but adds no additional parameter syntax, validation rules, or usage guidance beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (List), resource (analyses), scope (10 most recent), and context (authenticated account). It effectively distinguishes from siblings: 'list_recent' implies a collection view versus 'get_report' (specific retrieval), 'analyze_url' (creation), 'enhance_report' (modification), and 'generate_fix_prompt' (generation).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through 'recent analyses' but provides no explicit when-to-use guidance or comparison to alternatives like get_report. The agent must infer that this is for browsing recent activity versus retrieving specific reports. No exclusions or prerequisites are stated beyond the implied authentication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
