PreClick — An MCP-native URL preflight scanning service for autonomous agents.
Server Details
PreClick scans links for threats and confirms intent match with high accuracy before agents click.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: cybrlab-ai/preclick-mcp
- GitHub Stars: 6
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 6 of 6 tools scored.
The tool set draws a clear line between synchronous and asynchronous operations, but overlaps significantly within each mode: url_scanner_scan and url_scanner_scan_with_intent are nearly identical except for intent handling, and the async tools mirror the same split. This creates ambiguity about which tool to use in a given scenario, though the descriptions help clarify.
All tool names follow a consistent snake_case pattern with a clear prefix (url_scanner_) and descriptive suffixes (e.g., _async_scan, _async_task_result). The naming is highly predictable and uniform across all six tools, making it easy to understand their relationships.
With 6 tools, the count is reasonable for a URL scanning service, covering both synchronous and asynchronous workflows. It might be slightly over-scoped due to the duplication between sync and async versions, but each tool serves a distinct operational purpose in the set.
The tool set provides complete coverage for URL scanning: it supports both synchronous and asynchronous scanning, with and without intent, plus status checking and result retrieval. There are no obvious gaps; agents can handle all expected workflows from submission to result retrieval.
Available Tools
6 tools

url_scanner_async_scan (read-only)
Submit a URL for asynchronous security analysis. Returns immediately with a task_id. Poll with url_scanner_async_task_status to check progress, then url_scanner_async_task_result to get the scan result. Async counterpart of url_scanner_scan for clients without native MCP Tasks support.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to analyze. Must be HTTP or HTTPS. If no scheme provided, https:// is assumed. | |
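The description above spells out the full async lifecycle. As a minimal sketch, the submit/poll/fetch loop could look like the following, using the MCP Python SDK's streamable-HTTP client; the server URL is a placeholder, and the JSON payload fields (task_id, status) are inferred from the tool descriptions rather than a published output schema.

```python
# Sketch of the async scan lifecycle: submit, poll status, fetch result.
# SERVER_URL is a placeholder; payload parsing assumes the tools return JSON
# text content with the fields named in their descriptions (task_id, status).
import asyncio
import json

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://preclick.example.com/mcp"  # placeholder endpoint


def first_text(result) -> dict:
    """Parse the first text content block of a CallToolResult as JSON."""
    return json.loads(result.content[0].text)


async def scan(url: str) -> dict:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Submit: returns immediately with a task_id.
            submitted = first_text(
                await session.call_tool("url_scanner_async_scan", {"url": url})
            )
            task_id = submitted["task_id"]

            # 2. Poll until the task leaves the "working" state.
            while True:
                status = first_text(
                    await session.call_tool(
                        "url_scanner_async_task_status", {"task_id": task_id}
                    )
                )
                if status["status"] != "working":
                    break
                await asyncio.sleep(1.0)

            # 3. Fetch the full scan result (risk_score, confidence, ...).
            return first_text(
                await session.call_tool(
                    "url_scanner_async_task_result", {"task_id": task_id}
                )
            )


if __name__ == "__main__":
    print(asyncio.run(scan("example.com")))
```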
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnlyHint, destructiveHint) and external data access (openWorldHint). Description adds crucial behavioral context not in annotations: the async lifecycle (immediate return, task_id generation, required polling pattern) and temporal expectations. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose front-loaded, immediate return value second, procedural workflow third, sibling differentiation last. Efficient use of text to convey the async pattern and tool relationships without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking output schema, description compensates by specifying the return value (task_id) and the complete multi-step retrieval process. Adequately covers the complexity of async operations and the ecosystem of related tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for the single 'url' parameter, including protocol assumptions. Description references 'URL' in the purpose statement but does not add semantic details beyond the schema, warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Submit' and resource 'URL for asynchronous security analysis', clearly defining scope. Explicitly identifies itself as 'Async counterpart of url_scanner_scan', distinguishing from both the sync version and the sibling polling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Returns immediately with a task_id. Poll with url_scanner_async_task_status to check progress, then url_scanner_async_task_result'. Also clarifies selection criteria: 'for clients without native MCP Tasks support', establishing clear when-to-use logic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
url_scanner_async_scan_with_intent (read-only)
Submit a URL with optional user intent for asynchronous security analysis. Returns immediately with a task_id. Poll with url_scanner_async_task_status to check progress, then url_scanner_async_task_result to get the scan result. Async counterpart of url_scanner_scan_with_intent for clients without native MCP Tasks support.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to analyze. Must be HTTP or HTTPS. If no scheme provided, https:// is assumed. | |
| intent | No | Optional user intent for visiting the URL. Recommended for additional context. | |
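Invocation differs from the plain async scan only in the optional intent argument. A small illustrative sketch of the arguments payload (the intent string is hypothetical; the schema prescribes no format):

```python
# Arguments for url_scanner_async_scan_with_intent: identical to the plain
# async scan except for the optional free-text intent.
arguments = {
    "url": "https://example.com/invoice.pdf",
    "intent": "User asked to download their March invoice",  # illustrative
}
# e.g. await session.call_tool("url_scanner_async_scan_with_intent", arguments),
# then poll exactly as with url_scanner_async_scan.
```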
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states 'Submit... Returns immediately with a task_id' and references polling for task status, which implies the creation of a persistent task resource (state modification). This directly contradicts the readOnlyHint=true annotation which indicates no state modification. The description also implies side effects (task creation) while annotations declare the tool is read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences progress logically from purpose to return value to workflow to sibling differentiation. No redundant information; every sentence advances understanding of how to invoke and integrate the tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately explains the immediate return (task_id) and the complete async workflow. It appropriately delegates final result structure to the sibling tool description. Minor gap: could briefly characterize what 'security analysis' entails (malware, phishing, etc.).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents parameter semantics (URL format requirements, optional intent). The description mentions these parameters in context but does not add semantic detail beyond what the schema provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Submit a URL... for asynchronous security analysis'), identifies the core resource (URL), and explicitly positions itself as the 'Async counterpart of url_scanner_scan_with_intent', effectively distinguishing from the synchronous sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance ('Poll with url_scanner_async_task_status... then url_scanner_async_task_result') and clearly defines when to use this tool versus alternatives ('for clients without native MCP Tasks support').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
url_scanner_async_task_result (read-only)
Retrieve the result of an asynchronous scan task. If completed, returns the full scan result (risk_score, confidence, agent_access_directive, etc.). If still running, returns status with retry_after_ms — call again after that interval. Non-blocking.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | The task ID to retrieve the result for. | |
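A sketch of the retry pattern this description implies, assuming the pending response carries retry_after_ms as a top-level field; call_tool stands in for whatever MCP client invocation the agent uses:

```python
# Non-blocking retry loop for url_scanner_async_task_result: if the task is
# still running, honor retry_after_ms before asking again. The response
# envelope is assumed; only the field names come from the tool description.
import asyncio
from typing import Any, Awaitable, Callable

CallTool = Callable[[str, dict], Awaitable[dict[str, Any]]]


async def wait_for_result(call_tool: CallTool, task_id: str) -> dict[str, Any]:
    while True:
        payload = await call_tool("url_scanner_async_task_result", {"task_id": task_id})
        if "retry_after_ms" in payload:  # still running
            await asyncio.sleep(payload["retry_after_ms"] / 1000)
            continue
        # Completed: risk_score, confidence, agent_access_directive, etc.
        return payload
```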
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds substantial context beyond annotations: explicitly states 'Non-blocking' (execution trait), explains the dual-state behavior (completed vs running), and enumerates specific return fields (risk_score, confidence, agent_access_directive) despite no output schema. Does not contradict readOnlyHint=true.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four efficient sentences with zero waste: purpose declaration, success case with data preview, pending case with polling instruction, and behavioral trait. Front-loaded with the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter result-retrieval tool, description is complete. Compensates for missing output schema by listing example result fields and explaining the polling protocol. No gaps given the tool's limited scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (task_id fully described), establishing baseline 3. Description does not explicitly discuss the parameter, but none is needed given complete schema documentation. No additional semantic context provided, but no gap created.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Retrieve') + resource ('result of an asynchronous scan task'). Explicitly references 'asynchronous' to distinguish from synchronous 'url_scanner_scan' siblings, and 'result' distinguishes from 'url_scanner_async_task_status'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear polling guidance ('If still running, returns status with retry_after_ms — call again after that interval'), explaining the async workflow. Lacks explicit contrast with sibling 'url_scanner_async_task_status', though the distinction is implied by emphasizing 'full scan result' retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
url_scanner_async_task_status (read-only)
Check the status of an asynchronous scan task. Returns the current task status using native MCP task semantics (working, completed, failed, cancelled) without blocking. Use url_scanner_async_task_result to retrieve the result once completed.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | The task ID to check status for. | |
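A sketch of dispatching on the four native MCP task states the tool reports; the name of the status field in the response is an assumption based on the description:

```python
# Map each task state to the next action an agent should take.
def next_action(payload: dict) -> str:
    match payload["status"]:  # field name assumed
        case "working":
            return "keep polling url_scanner_async_task_status"
        case "completed":
            return "fetch the result via url_scanner_async_task_result"
        case "failed" | "cancelled":
            return "stop polling and surface the terminal state"
        case other:
            raise ValueError(f"unexpected task status: {other!r}")
```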
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable context beyond readOnlyHint annotation by specifying exact status values returned ('working, completed, failed, cancelled') and explicitly stating 'without blocking,' clarifying the non-blocking polling mechanism.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose declaration, behavioral specification (status values + non-blocking), and usage guidance. Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool, adequately compensates for missing output schema by documenting the four possible status return values. Annotations cover safety profile; workflow guidance covers typical usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for the single 'task_id' parameter. The description does not add parameter-specific semantics, which meets the baseline expectation given the complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Check' with clear resource 'status of an asynchronous scan task'. Explicitly distinguishes from sibling url_scanner_async_task_result by contrasting status checking versus result retrieval, and implies distinction from synchronous scan tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Use url_scanner_async_task_result to retrieve the result once completed,' clearly indicating this tool is for polling status during async operations while naming the specific sibling for the next step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
url_scanner_scan (read-only)
Analyze a URL for security threats (synchronous, blocks until complete or timeout). Returns risk score, confidence, agent access guidance, and intent_alignment (always not_provided for this tool; use url_scanner_scan_with_intent for intent context). For long-running scans, prefer url_scanner_async_scan which returns immediately with a task_id for polling via url_scanner_async_task_result.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to analyze. Must be HTTP or HTTPS. If no scheme provided, https:// is assumed. | |
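A one-shot synchronous preflight might look like the sketch below, with call_tool again standing in for the actual MCP client invocation. The agent_access_directive field name comes from the async result description; the "allow" value it is compared against is an assumption.

```python
# Synchronous preflight: the call blocks until the scan completes or times out,
# so no polling is needed. The gating value "allow" is assumed, not documented.
from typing import Any, Awaitable, Callable

CallTool = Callable[[str, dict], Awaitable[dict[str, Any]]]


async def preflight(call_tool: CallTool, url: str) -> bool:
    result = await call_tool("url_scanner_scan", {"url": url})
    # Expected fields per the description: risk score, confidence, agent access
    # guidance, and intent_alignment (always "not_provided" for this tool).
    return result.get("agent_access_directive") == "allow"  # assumed value
```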
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint and destructiveHint, but the description crucially adds 'synchronous, blocks until complete or timeout' which defines the execution model. It also discloses return values (risk score, confidence, agent access guidance, intent_alignment) despite no output schema existing. Minor gap: doesn't specify timeout duration or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. The first covers purpose and blocking behavior, the second the return structure, and the third alternative selection. Information density is high with no redundant phrases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description admirably documents return values and distinguishes from five sibling tools. Annotations cover safety profile. Minor gap regarding specific timeout thresholds or error handling keeps it from a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description does not add parameter-specific semantics beyond the schema, but the schema already comprehensively documents the URL parameter (HTTP/HTTPS requirements, default scheme behavior).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Analyze' with clear resource 'URL' and purpose 'security threats'. It effectively distinguishes from siblings by noting 'synchronous' vs async variants and explicitly stating 'intent_alignment (always not_provided for this tool; use url_scanner_scan_with_intent for intent context)'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (synchronous scans) and when to prefer alternatives: 'For long-running scans, prefer url_scanner_async_scan' and references polling via 'url_scanner_async_task_result'. Also directs users to 'url_scanner_scan_with_intent' for intent context needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
url_scanner_scan_with_intent (read-only)
Analyze a URL for security threats with optional user intent context (synchronous, blocks until complete or timeout). Returns risk score, confidence, agent access guidance, and intent_alignment. For long-running scans, prefer url_scanner_async_scan_with_intent which returns immediately with a task_id for polling via url_scanner_async_task_result.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to analyze. Must be HTTP or HTTPS. If no scheme provided, https:// is assumed. | |
| intent | No | Optional user intent for visiting the URL. Recommended for additional context. | |
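The intent variant is the same blocking call plus the optional intent string, and intent_alignment in the response then carries meaning. A sketch, with the "mismatch" value assumed (only the field name appears in the description):

```python
# Synchronous scan with intent context. The "mismatch" comparison value is an
# assumption; the listing documents the intent_alignment field name only.
from typing import Any, Awaitable, Callable

CallTool = Callable[[str, dict], Awaitable[dict[str, Any]]]


async def preflight_with_intent(call_tool: CallTool, url: str, intent: str) -> dict[str, Any]:
    result = await call_tool("url_scanner_scan_with_intent", {"url": url, "intent": intent})
    if result.get("intent_alignment") == "mismatch":  # assumed value
        print(f"warning: {url} does not match the stated intent: {intent!r}")
    return result
```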
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true. Description adds critical behavioral context: synchronous execution ('blocks until complete or timeout'), specific return values ('risk score, confidence, agent access guidance, intent_alignment'), and timeout behavior. Does not mention rate limits or error states, preventing a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: 1) Purpose + sync behavior, 2) Return values, 3) Alternative recommendation. Front-loaded with critical blocking behavior. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with 100% schema coverage, good annotations, and no output schema, the description is complete. It compensates for missing output schema by listing return fields, and provides sufficient context for agent selection vs 5 sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description mentions 'optional user intent context' which aligns with the intent parameter's optional nature, but does not add semantic meaning beyond what the schema already provides (e.g., no examples of intent strings, no format details for URL beyond schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Analyze', resource 'URL', and distinguishing feature 'security threats with optional user intent context'. Clearly differentiates from sibling 'url_scanner_scan' by mentioning intent, and from async variants by noting 'synchronous' nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when-not-to-use: 'For long-running scans, prefer url_scanner_async_scan_with_intent'. Also provides the complete alternative workflow (returns task_id, polling via url_scanner_async_task_result). Mentions blocking behavior which is critical for agent decision-making.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.