pageguard-mcp
Server Quality Checklist (Official)
- Disambiguation 5/5
The three tools serve distinct purposes: scan_local analyzes codebase dependencies, scan_url analyzes live websites, and generate_docs produces legal documents from scan results. No functional overlap exists between them.
- Naming Consistency 5/5
All tools follow a consistent pageguard_verb_noun snake_case pattern. The scanning tools use parallel naming (scan_local, scan_url) to distinguish targets, while generate_docs clearly indicates its document creation purpose.
- Tool Count 4/5
Three tools is appropriate for this focused domain, covering the essential scan-local, scan-production, and generate-documentation workflow. While functional, the surface is minimal and could benefit from supporting tools like get_scan or list_documents.
- Completeness 4/5
The core workflow is covered: dual scanning capabilities and document generation. Minor gaps exist in document lifecycle management (no retrieval of previous scans or generated documents), but agents can work with immediate return values.
Average 4.3/5 across 3 of 3 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
This server provides 3 tools.
No known security issues or vulnerabilities reported.
Are you the author? Add related servers to improve discoverability.
Tool Scores
pageguard_scan_url
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosure. It successfully documents the return value structure ('ComplianceReport with risk score...'), authentication requirements ('PAGEGUARD_API_KEY env var'), and scope of analysis ('Detects tracking technologies...'). Minor gap: does not mention rate limits, idempotency, or cache behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences each earn their place: (1) core purpose, (2) detection capabilities, (3) return structure, (4) authentication. Information is front-loaded with the primary action in the first sentence. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description adequately compensates by explaining the return format and authentication mechanism. The single parameter is sufficiently documented in the schema. Minor deduction for not mentioning potential side effects (e.g., network requests to target URL) or prerequisites.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for its single parameter ('The full URL to scan...'). The description mentions 'live website URL' but adds no additional semantic detail (format constraints, validation rules) beyond what the schema already provides, warranting the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Scan') and resource ('live website URL') targeting 'privacy compliance issues.' The phrase 'live website URL' effectively distinguishes this from sibling tool 'pageguard_scan_local' (implying local files), while 'scan' differentiates from 'pageguard_generate_docs'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying 'live website URL' and 'actual deployed site,' implicitly guiding users to choose this over 'scan_local' for local files. However, it does not explicitly state 'when-not' rules or name sibling alternatives for direct comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pageguard_generate_docs
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses critical behavioral traits: pricing for each document type ($29/$49/etc.), credit consumption requirement, and AI-authored nature of outputs. It lacks details on error states (e.g., insufficient credits) or output format, but covers the essential cost and auth behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently packed with no wasted words, front-loading the core purpose before listing prerequisites and pricing details. The document type list is dense but necessary. A 5 would require better visual separation between prerequisites and pricing, but it earns high marks for information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description comprehensively covers inputs, costs, and prerequisites, but omits what the tool returns (e.g., download URL, raw text, file ID) despite having no output schema. For a paid document generation tool, this output gap is a significant omission, though the input documentation is thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds substantial semantic value by explaining what each documentType actually contains and costs—information absent from the schema. For example, it clarifies 'single' means '$29 — privacy + terms + cookie' while 'bundle' means '$49 — everything', which is crucial for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Generate[s] AI-written legal compliance documents' with specific examples (privacy policy, terms of service) and context (for a previously scanned site). It uses a specific verb+resource combination and implicitly distinguishes from sibling scan tools by noting the 'previously scanned site' requirement.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides clear prerequisites: 'Requires a scanId from a prior URL scan' and 'PAGEGUARD_API_KEY with available credits'. This effectively signals when to use the tool (after scanning) and what is needed. It could explicitly name the sibling tool (pageguard_scan_url) to use first, but the guidance is sufficiently clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pageguard_scan_local
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully communicates operational constraints (no API key/network), file access patterns (package.json, .env, config files), and return value structure (ComplianceReport with specific fields). It does not mention side effects or idempotency, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two information-dense sentences with zero redundancy. Front-loaded with the core action, followed by mechanism, operational requirements, and return value. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and output schema, the description compensates effectively by detailing the return structure (ComplianceReport contents) and operational requirements. For a single-parameter scanning tool, this provides sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage describing the path parameter, the description adds valuable semantic context about what constitutes a valid target directory (one containing package.json and config files to scan) and implies the nature of the input, exceeding the baseline expectations for fully documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the action (scan), target (local project directory), and mechanism (checking package.json, config files, and .env files against tracking signatures). It clearly distinguishes from sibling pageguard_scan_url by emphasizing 'local' and from pageguard_generate_docs by focusing on detection rather than documentation generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear contextual guidance through 'No API key or network access needed,' implicitly positioning this as the offline/local alternative to pageguard_scan_url. However, it lacks explicit when-not-to-use guidance or direct comparison statements naming the siblings as alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS (per-tool definition quality score) + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
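As a concrete illustration, here is a minimal sketch of that weighting in TypeScript. The dimension weights, the 60/40 mean-versus-minimum blend, the 70/30 overall split, and the tier thresholds are taken directly from the description above; the type and function names are illustrative and not Glama's actual implementation.

```typescript
// Illustrative sketch of the scoring formula described above.
// Weights and thresholds come from this section; names are hypothetical.

type DimensionScores = {
  purpose: number;      // Purpose Clarity, 25%
  usage: number;        // Usage Guidelines, 20%
  behavior: number;     // Behavioral Transparency, 20%
  parameters: number;   // Parameter Semantics, 15%
  conciseness: number;  // Conciseness & Structure, 10%
  completeness: number; // Contextual Completeness, 10%
};

// Per-tool definition quality score (TDQS), each dimension scored 1-5.
function tdqs(d: DimensionScores): number {
  return (
    0.25 * d.purpose +
    0.20 * d.usage +
    0.20 * d.behavior +
    0.15 * d.parameters +
    0.10 * d.conciseness +
    0.10 * d.completeness
  );
}

// Server-level score: 70% definition quality, 30% coherence.
function overallScore(toolScores: number[], coherence: number): number {
  const mean = toolScores.reduce((a, b) => a + b, 0) / toolScores.length;
  const min = Math.min(...toolScores);
  const definitionQuality = 0.6 * mean + 0.4 * min; // one weak tool drags this down
  return 0.7 * definitionQuality + 0.3 * coherence;
}

function tier(score: number): string {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}
```

For instance, hypothetical tool scores averaging 4.3 with a weakest tool at 3.0 would give a definition quality of 0.6 × 4.3 + 0.4 × 3.0 = 3.78, showing how the minimum term penalizes a single poorly described tool.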
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/pageguard/pageguard-mcp'
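The same endpoint can be called from code. The sketch below is a minimal example: it uses the URL shown in the curl command above and simply prints the raw JSON response, since the response schema is not documented in this section.

```typescript
// Minimal sketch: fetch this server's entry from the MCP directory API.
// The URL is the one documented above; the response shape is not specified
// here, so the snippet prints the raw JSON rather than assuming fields.
const url = "https://glama.ai/api/mcp/v1/servers/pageguard/pageguard-mcp";

async function fetchServerEntry(): Promise<void> {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  const entry = await res.json();
  console.log(JSON.stringify(entry, null, 2));
}

fetchServerEntry().catch(console.error);
```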
If you have feedback or need assistance with the MCP directory API, please join our Discord server.