
Server Details

EU-hosted website monitoring + 17-framework compliance MCP. One anonymous tool, four authenticated.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: gweber/siteguardian-mcp-examples
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Descriptions: A

Average 4.8/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool targets a distinct operation: current status, drift history, fix recommendations, domain inventory, and one-off scanning. No functional overlap.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., get_domain_status, scan_domain), making them predictable and easy to distinguish.

Tool Count: 5/5

With 5 tools, the server is well-scoped for a security monitoring service, covering essential operations without being too sparse or bloated.

Completeness: 4/5

The tool surface covers core read operations and fix recommendations. A minor gap is the absence of tools to add/remove monitoring subscriptions, but those may be handled externally.

Available Tools

5 tools

get_domain_status: A
Read-only · Idempotent

Returns the current security grade (A–F), last-scan timestamp, and list of active issues for a domain that is ALREADY under SiteGuardian monitoring by the authenticated account. Each issue carries a stable id, a severity, a short title, and an impact description. The response also includes a relative dashboard URL.

Use this when the user asks about the current state of a specific monitored domain, wants to confirm a recent change landed, or needs issue ids to call get_fix_recommendations with a specific issue_id.

Do NOT use this for domains not yet under monitoring — it will return a domain_not_monitored error; call scan_domain for one-off checks instead. Compliance framework tags (NIS2 / GDPR / DORA) are NOT included in v1; framework tagging on the monitored-domain path is tracked as a follow-up. Requires a valid API key.

Parameters (JSON Schema)
Name    Required  Description  Default
domain  Yes

Output Schema

Fields (JSON Schema)
Name           Required  Description
grade          No
domain         Yes
last_scan_at   No
active_issues  Yes
dashboard_url  Yes
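
To make the calling convention concrete, here is a minimal sketch using the official MCP Python SDK over Streamable HTTP. It is an illustration only: the endpoint URL is hypothetical (this page does not show the real one), and passing the API key as a bearer token is an assumption, since the server only states that a valid API key is required.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://siteguardian.example/mcp"     # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme


async def main() -> None:
    async with streamablehttp_client(SERVER_URL, headers=HEADERS) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Only valid for domains already under monitoring; otherwise
            # the server answers with a domain_not_monitored error.
            result = await session.call_tool(
                "get_domain_status", {"domain": "example.com"}
            )
            if result.isError:
                print("tool error:", result.content)
            else:
                # grade, last_scan_at, active_issues (stable ids), dashboard_url
                print(result.content)


asyncio.run(main())
```

The stable ids in active_issues are the values that get_fix_recommendations accepts in its optional issue_id parameter.
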
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and idempotentHint=true. Description adds useful details: returns relative dashboard URL, each issue has stable id, severity, title, impact description, and explicitly notes that compliance framework tags are NOT included in v1. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with three clear paragraphs. Could be slightly more concise, but every sentence adds value. Front-loaded with key outputs and error conditions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Output schema exists, but description already covers return fields (grade, timestamp, issues with id/severity/title/impact, dashboard URL). For a 1-param tool with good annotations, this is fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has one required parameter (domain) with 0% description coverage. Description adds critical meaning: domain must be already under monitoring, otherwise error. This compensates for schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns security grade, last-scan timestamp, and active issues for a monitored domain. Specific verb (returns) and resource (domain status). Distinguishes from sibling scan_domain which handles unmonitored domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (user asks about current state, confirm changes, need issue IDs) and when not to use (unmonitored domains, call scan_domain instead). Also mentions error condition (domain_not_monitored).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_drift_events: A
Read-only · Idempotent

Returns recent configuration drift events for a domain under monitoring by the authenticated account — TLS changes, DNSSEC state changes, new or removed security headers, shifts in third-party JS hosts, new cookies. Each event carries its observed-at timestamp, a kind (tls/dnssec/cookies/js_hosts/headers), a severity classified centrally (high for tls/dnssec/headers, medium for cookies/js_hosts, otherwise low), a short summary, and a sanitised detail payload.

Use this when the user asks 'what changed' on a domain, wants to audit recent posture shifts, or is diagnosing an unexpected issue. Pair it with get_domain_status to see the current state and get_drift_events to see how it got there.

Do NOT use this for a domain that is not under monitoring — you'll get a domain_not_monitored error; monitoring has to be active for the drift history to accumulate. Optional since (ISO-8601) and limit (1..100) params narrow the window. Requires a valid API key.

Parameters (JSON Schema)
Name    Required  Description  Default
limit   No
since   No
domain  Yes

Output Schema

Fields (JSON Schema)
Name    Required  Description
domain  Yes
events  Yes
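
A sketch of a scoped history query, under the same assumptions as the get_domain_status example above (hypothetical endpoint, assumed bearer-token header). It exercises the optional since and limit parameters:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://siteguardian.example/mcp"     # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme


async def main() -> None:
    async with streamablehttp_client(SERVER_URL, headers=HEADERS) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # since is ISO-8601 and narrows the window; limit must be in 1..100.
            result = await session.call_tool(
                "get_drift_events",
                {"domain": "example.com", "since": "2024-01-01T00:00:00Z", "limit": 25},
            )
            # Each event carries an observed-at timestamp, kind, severity,
            # summary, and sanitised detail payload.
            print(result.content)


asyncio.run(main())
```
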
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds critical behavioral context: requires active monitoring (otherwise error), severity classification logic, and optional parameter effects. No contradictions with annotations.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is somewhat verbose (listing all event kinds and field details) but well-structured: main purpose first, then usage guidance, caution, and param hints. It could be slightly trimmed without losing value, but remains efficient for an agent.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 params, 1 required) and the presence of an output schema, the description covers all necessary aspects: return fields, error conditions, parameter hints, and sibling differentiation. It provides comprehensive context for correct invocation.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains the 'since' parameter as ISO-8601 and 'limit' as 1..100, and the 'domain' parameter is implied from context. While it does not detail every parameter's schema format, it provides sufficient guidance for their purpose and constraints.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it returns configuration drift events for a domain, listing specific event types (TLS, DNSSEC, headers, etc.) and their included fields (timestamp, kind, severity, summary, detail). It distinguishes from the sibling tool get_domain_status by explaining the pairing use case.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly states when to use ('when the user asks what changed', 'audit recent posture shifts', 'diagnose unexpected issues') and when not to use ('Do NOT use for a domain not under monitoring'), including an explicit error condition. It also suggests pairing with get_domain_status, providing clear guidance on alternatives.

get_fix_recommendations: A
Read-only · Idempotent

Returns copy-paste-ready fix recommendations (nginx, Apache, DNS, shell) for the issues found on a domain the caller has already paid for — either an active Monitor/Compliance subscription covering the domain, OR a purchased one-off Report for the domain. Each recommendation carries a stable issue_id, a priority (high/medium/low), a title, prose instructions, one or more config snippets with the target domain already interpolated, a verify command, and a category tag.

Use this when the user asks how to fix an issue, wants the exact config to apply, or needs to verify a fix worked. Pass the optional issue_id to scope the response to one specific finding. The response is read-only — this tool NEVER triggers a fresh scan; fixes are computed from the most recent stored scan (including the Report-included re-scan if that was used).

Do NOT use this for domains the caller hasn't purchased coverage for — you'll get an upgrade_required error that links to the pricing page. Do NOT use this to run or trigger a scan; call scan_domain for anonymous checks. Requires a valid API key.

Parameters (JSON Schema)
Name      Required  Description  Default
domain    Yes
issue_id  No

Output Schema

Fields (JSON Schema)
Name             Required  Description
domain           Yes
source           Yes       Which purchased product backed this response — 'monitor' for a subscription, 'report' for a one-off Report.
scanned_at       Yes       Timestamp of the underlying scan the fixes were computed from.
recommendations  Yes       Priority-sorted list of actionable fixes.
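
A sketch of a scoped fix lookup with the upgrade_required error path handled. The endpoint, auth header, and the issue_id value are all hypothetical:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://siteguardian.example/mcp"     # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme


async def main() -> None:
    async with streamablehttp_client(SERVER_URL, headers=HEADERS) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_fix_recommendations",
                # issue_id (hypothetical value) scopes the response to one
                # finding; omit it to get every recommendation for the domain.
                {"domain": "example.com", "issue_id": "hsts-missing"},
            )
            if result.isError:
                # e.g. upgrade_required when the domain has no paid coverage
                print("tool error:", result.content)
            else:
                print(result.content)  # priority-sorted recommendations


asyncio.run(main())
```
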
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, destructiveHint, idempotentHint. Description adds that the tool is read-only, never triggers a fresh scan, requires valid API key, and fixes come from most recent stored scan. No contradictions.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two paragraphs, front-loaded with key purpose and contents. Efficient, though phrasing such as 'the caller has already paid for' could be trimmed slightly. Still concise and well-structured.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 parameters, output schema exists, annotations cover behavior), the description covers what it returns, when to use, prerequisites, error case, and response field details. Complete for agent selection and invocation.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has two parameters with 0% description coverage. Description explains that 'issue_id' is optional and scopes response to one finding, adding meaning beyond schema. However, does not describe 'domain' parameter beyond the main description context.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns 'copy-paste-ready fix recommendations' for issues on paid domains, listing the types (nginx, Apache, DNS, shell) and what each recommendation contains. This is distinct from siblings like 'scan_domain' (trigger scans) or 'get_domain_status' (status).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use: 'when the user asks how to fix an issue' and when not to: 'Do NOT use this for domains the caller hasn't purchased coverage for' and 'Do NOT use this to run or trigger a scan; call scan_domain'. Provides clear alternative.

list_monitored_domains: A
Read-only · Idempotent

Returns the full list of domains under continuous SiteGuardian monitoring for the authenticated account. Each entry includes the domain, current security grade (A–F), timestamp of the last completed scan, and a relative dashboard URL.

Use this when the user asks what they are monitoring, wants an inventory summary, or needs to look up a specific domain's exact spelling before calling get_domain_status / get_drift_events / get_fix_recommendations. The list is scoped entirely by the API key — there is no filter parameter to widen or narrow the result.

Do NOT use this to enumerate domains the user does not own or monitor — it only returns their own inventory. Do NOT call it to trigger a scan (it does not); use scan_domain for one-off checks. Requires a valid API key.

Parameters (JSON Schema)
No parameters

Output Schema

Fields (JSON Schema)
Name     Required
domains  Yes
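
Because the tool takes no parameters, a call is just the tool name with empty arguments. Endpoint and auth header are again assumptions:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://siteguardian.example/mcp"     # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme


async def main() -> None:
    async with streamablehttp_client(SERVER_URL, headers=HEADERS) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Scope is fixed by the API key; there is nothing to filter on.
            result = await session.call_tool("list_monitored_domains", {})
            # One entry per domain: domain, grade, last scan, dashboard URL.
            print(result.content)


asyncio.run(main())
```
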
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that the list is scoped by API key with no filter parameters, and that it does not trigger scans. These are beyond annotations (readOnlyHint, idempotentHint, destructiveHint) and add important behavioral context. No contradiction with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with three clear paragraphs: purpose, usage guidance, and prohibitions. No fluff; every sentence adds value. Front-loaded with core functionality.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 0-parameter tool with output schema, the description provides complete context: what output contains, when to use, and limitations. No missing critical information.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters, so schema coverage is trivially complete. The description adds meaning by explicitly stating there are no filter parameters and that the result is scoped by API key, which clarifies the tool's fixed behavior.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns the full list of monitored domains with specific fields (domain, grade, timestamp, dashboard URL). It uses a specific verb ('Returns') and resource ('list of domains'), and distinguishes from siblings by mentioning alternative tools for other use cases.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use (user asks about monitoring, needs inventory, or wants exact spelling before other calls) and when not to use (do not enumerate external domains, do not trigger scans). Names alternative tool scan_domain for scans.

scan_domain: A
Read-only · Idempotent

Runs a free one-off security scan of the given domain and returns its grade (A–F), scan timestamp, and up to three top-priority issues with a permalink to the full report on siteguardian.io.

Use this when the user asks for a quick security check of a domain that is NOT yet under SiteGuardian monitoring, or when they want a fresh assessment before subscribing. Results are cached for two hours, so repeated calls about the same domain return the same snapshot and mark it with cached=True.

Do NOT use this for domains already under monitoring by the user — call get_domain_status instead for the account-scoped view with framework tags. Do NOT use this to batch-scan many domains as a competitive-intelligence tool; per-source-IP and per-target rate limits bound usage. This tool does not require authentication.

Parameters (JSON Schema)
Name    Required  Description  Default
domain  Yes

Output Schema

Fields (JSON Schema)
Name        Required  Description
grade       Yes
score       Yes
cached      Yes       True when the result was served from the 2-hour ScanLog cache.
domain      Yes
report_url  Yes       Permalink to the full scan report on siteguardian.io.
scanned_at  Yes
top_issues  Yes       Up to 3 top-priority issues found. Sorted high → medium → low.
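
Since this tool is anonymous, the sketch below needs no auth header; only the endpoint URL is an assumption:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://siteguardian.example/mcp"  # hypothetical endpoint


async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("scan_domain", {"domain": "example.com"})
            # Repeating the call within two hours should return the same
            # snapshot flagged cached=True, per the description above.
            print(result.content)


asyncio.run(main())
```
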
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly, openWorld, idempotent, non-destructive. The description adds caching details (2 hours, cached=True), rate limits, and the fact that no authentication is required, all of which go beyond the annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well structured: the first sentence states the core function, the second gives usage context, the third covers caching, and the fourth lists exclusions. No redundant information.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description is sufficient: it covers purpose, when to use, and behavioral details, and adds value beyond the structured fields.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage for the domain parameter. The description fully explains that it expects a domain string and details the output, compensating completely.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool runs a security scan, returns grade, timestamp, top issues, and a permalink. Distinguishes from sibling tool get_domain_status.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (quick check for non-monitored domains, fresh assessment) and when not (for monitored domains, batch scanning). Mention of rate limits and caching provides clear context.
