Sovereignty Scan
Server Details
EU AI Act sovereignty scanning. Provider residency, registration status, audit trail support.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: kajaril/sovereignty-scan-mcp
- GitHub Stars: 1
- Server Listing: sovereignty-scan-mcp
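The listing reports a Streamable HTTP transport but does not publish the server URL. Below is a minimal connection sketch using the MCP TypeScript SDK; the endpoint is a placeholder, and the import paths and client calls follow the SDK's documented client API rather than anything specific to this server.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical endpoint: the listing does not publish the server URL,
// so substitute the address shown on Glama or in your gateway config.
const SERVER_URL = new URL("https://sovereignty-scan.example.com/mcp");

const client = new Client({ name: "sovereignty-scan-demo", version: "0.1.0" });

// The listing reports Streamable HTTP as the transport.
await client.connect(new StreamableHTTPClientTransport(SERVER_URL));

// Enumerate the five tools described under Available Tools.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));
```

The per-tool sketches below reuse this connected `client`.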
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 5 tools scored.
Each tool has a clearly distinct purpose: listing providers under US CLOUD Act, listing all providers with filter, scanning a single provider's detailed profile, aggregating a stack's summary, and suggesting EU alternatives. No overlap or ambiguity.
All tool names follow a consistent verb_noun pattern (e.g., get_us_cloud_act_providers, list_providers, scan_provider, scan_stack, suggest_eu_alternatives). The naming is predictable and uniform.
With 5 tools, the server is well-scoped for its domain. It covers listing, detailed scanning, aggregation, and suggestions without being too sparse or bloated.
The tool set covers essential queries for sovereignty scanning: listing, detailed provider info, stack analysis, and EU alternatives. A minor gap is the lack of an explicit category listing tool, but the category filter on list_providers mitigates this.
Available Tools
5 tools

get_us_cloud_act_providers
Returns all providers subject to US CLOUD Act compelled disclosure (18 U.S.C. § 2713).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
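A minimal call sketch, reusing the connected `client` from the sketch under Server Details. Since the tool takes no parameters, the arguments object is empty; the shape of the returned content is not published by the listing.

```typescript
// No parameters: pass an empty arguments object.
const cloudActProviders = await client.callTool({
  name: "get_us_cloud_act_providers",
  arguments: {},
});
// The result content carries the provider list; inspect it before
// relying on specific fields, since no output schema is published.
console.log(cloudActProviders.content);
```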
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It opens with 'Returns', indicating a read-only operation with no side effects, which is sufficient for a simple list tool. It lacks explicit statements about authentication, rate limits, or data freshness; given the tool's simplicity, this is only a minor gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words. It includes the legal citation for precision, which is relevant. Front-loads the core purpose effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple: no parameters, no output schema. The description fully specifies what is returned (all providers subject to US CLOUD Act). It covers the essential context needed for an agent to select and invoke this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, and the input schema is empty with 100% coverage. Per the scoring guidelines, the baseline for zero parameters is 4. The description adds no parameter information because none exists, which is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Returns' and clearly identifies the resource: 'all providers subject to US CLOUD Act compelled disclosure (18 U.S.C. § 2713)'. This distinguishes it from sibling tools like list_providers (generic list) or suggest_eu_alternatives (EU-focused).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing US CLOUD Act providers but offers no explicit 'when to use' or 'when not to use' guidance, nor mentions alternative tools like suggest_eu_alternatives. Usage context is implied but not elaborated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_providers
List all tracked providers, with optional category filter.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Filter by category: AI, Hosting, Database, Auth, Analytics, Observability, CI/CD, Communications, Payments, Search, Sandbox, Cache | |
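A hedged call sketch, reusing the connected `client` from the sketch under Server Details; "Hosting" is one of the categories enumerated in the parameter table above, and the filter can be omitted to list everything.

```typescript
// Optional category filter; allowed values are listed in the parameter table.
const hostingProviders = await client.callTool({
  name: "list_providers",
  arguments: { category: "Hosting" },
});
console.log(hostingProviders.content);
```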
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It does not disclose behavioral aspects such as authentication requirements, rate limits, pagination, or response format. For a read-only tool, the description should at least hint at the nature of the response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is succinct and front-loaded. Every word serves a purpose with no filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (one optional parameter, no output schema), the description communicates the core function. It could note pagination or ordering behavior, but for a simple list tool this coverage is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'category' parameter sufficiently. The description merely reiterates it as an 'optional category filter', adding minimal extra meaning. The baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and the resource 'tracked providers', with an optional category filter. It effectively distinguishes this tool from siblings like 'scan_provider', which implies a different action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing providers, but provides no explicit guidance on when to use this tool over alternatives like 'scan_provider' or 'suggest_eu_alternatives'. The usage context is implicit but not fully elaborated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_provider
Returns the full jurisdictional profile for a single vendor: headquarters country, data residency regions, EU residency option, US CLOUD Act exposure, GDPR DPA availability, and legal framework.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Provider name (case-insensitive) | |
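A hedged call sketch, reusing the connected `client` from the sketch under Server Details. The provider name below is illustrative, not taken from the listing; matching is case-insensitive per the parameter description.

```typescript
// The name match is case-insensitive; "vercel" is an illustrative example.
const profile = await client.callTool({
  name: "scan_provider",
  arguments: { name: "vercel" },
});
// Returns the vendor's jurisdictional profile (headquarters, residency, etc.).
console.log(profile.content);
```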
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the output comprehensively (headquarters, data residency, etc.), indicating read-only behavior. It does not mention permissions, rate limits, or side effects, but the output details are sufficient for a simple query tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that immediately states the tool's purpose and enumerates the output fields. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter and no output schema, the description fully explains what the tool returns, listing all relevant fields. It is complete for the intended use case of retrieving a vendor's jurisdictional profile.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the single parameter 'name' described as 'Provider name (case-insensitive)'. The description adds no meaning beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns the full jurisdictional profile for a single vendor, listing specific fields (headquarters, data residency, and so on). This distinguishes it from siblings like list_providers (which lists all providers) and scan_stack (which aggregates across a stack).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for obtaining detailed jurisdiction info on one provider, but does not explicitly mention when to use it over siblings like get_us_cloud_act_providers or suggest_eu_alternatives. Context is clear but lacks explicit alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_stack
Aggregate jurisdictional summary for a stack of providers: CLOUD Act exposure count, EU residency coverage, and missing DPAs. Maximum 50 providers per call.
| Name | Required | Description | Default |
|---|---|---|---|
| providers | Yes | Provider names to scan (maximum 50) | |
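A hedged call sketch, reusing the connected `client` from the sketch under Server Details. The provider names are illustrative, not taken from the listing; the tool accepts up to 50 names per call.

```typescript
// Up to 50 provider names per call, per the tool description.
// The names below are illustrative examples.
const stackSummary = await client.callTool({
  name: "scan_stack",
  arguments: { providers: ["vercel", "supabase", "stripe"] },
});
// Aggregated metrics: CLOUD Act exposure count, EU residency coverage, missing DPAs.
console.log(stackSummary.content);
```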
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses output metrics but does not mention whether the tool is read-only, requires authentication, or has other behavioral traits. The impact of aggregating a large stack is not addressed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently states the tool's purpose and a key limitation (maximum 50 providers). No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema, the description partially explains return values by listing three metrics, but lacks details on format, ordering, or error handling. The parameter is well-documented by the schema, so overall completeness is adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage describing the 'providers' parameter as an array of strings with maxItems 50. The description adds value by explaining that the output includes specific jurisdictional metrics (CLOUD Act, EU residency, DPAs), which helps the agent understand what the parameter is used for.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool aggregates a jurisdictional summary for a stack of providers, listing specific metrics (CLOUD Act exposure count, EU residency coverage, missing DPAs). This distinguishes it from siblings like scan_provider (single provider) or list_providers (listing all).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions a maximum of 50 providers per call, implying batch usage, but does not explicitly state when to use this tool versus alternatives like scan_provider for single providers or get_us_cloud_act_providers for US-specific data. Usage context is implicit but lacks exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest_eu_alternatives
Returns EU/EEA/UK/CH-based alternatives in the same category as the given provider. Alternatives have eu_residency_option=true and headquarters in an EU member state, EEA country, UK, or Switzerland. Capped at 10.
| Name | Required | Description | Default |
|---|---|---|---|
| provider_name | Yes | Provider to find alternatives for | |
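A hedged call sketch, reusing the connected `client` from the sketch under Server Details. The provider name is illustrative; per the description, results are limited to EU/EEA/UK/CH-based alternatives in the same category, capped at 10.

```typescript
// Returns at most 10 EU/EEA/UK/CH-based alternatives in the same category.
// "vercel" is an illustrative provider name.
const alternatives = await client.callTool({
  name: "suggest_eu_alternatives",
  arguments: { provider_name: "vercel" },
});
console.log(alternatives.content);
```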
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses that results are capped at 10 and defines the conditions for alternatives (eu_residency_option=true and headquarters location). This provides behavioral context beyond the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: three short sentences, all essential and front-loaded. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity, the description covers the basics (what, constraints, parameters). However, it lacks information about output format or error behavior, and no output schema is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter, so the description adds minimal extra meaning beyond confirming that the parameter is the provider to find alternatives for. The baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns EU/EEA/UK/CH-based alternatives for a given provider, which is distinct from sibling tools like get_us_cloud_act_providers or scan_provider.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the tool returns alternatives in the same category as the provider, with specific criteria for alternatives. However, it does not explicitly guide when to use this tool versus siblings, nor what to do if no alternatives exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.