Glama

PreClick — An MCP-native URL preflight scanning service for autonomous agents (formerly URLCheck).

Server Details

PreClick scans links for threats and confirms intent match with high accuracy before agents click.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: cybrlab-ai/preclick-mcp
GitHub Stars: 6

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
url_scanner_async_scan (Read-only)

Submit a URL for asynchronous security analysis. Returns immediately with a task_id. Poll with url_scanner_async_task_status to check progress, then url_scanner_async_task_result to get the scan result. Async counterpart of url_scanner_scan for clients without native MCP Tasks support.

Parameters (JSON Schema)
- url (required): The URL to analyze. Must be HTTP or HTTPS. If no scheme is provided, https:// is assumed.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial behavioral context beyond annotations: it explains the immediate return with task_id, the multi-step polling pattern, and the MCP Tasks context. Annotations cover the safety profile (readOnlyHint/openWorldHint), so the description appropriately focuses on the async execution pattern rather than repeating safety flags.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficiently structured sentences covering purpose, immediate return behavior, and complete workflow guidance. Every sentence delivers unique value with zero redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description fully compensates by explaining the task_id return value and the complete async lifecycle (submit → status check → result retrieval). Sufficient for an asynchronous submission tool with good input schema coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage ('The URL to analyze...'), establishing the baseline of 3. The description does not mention parameters, but given complete schema documentation, no additional parameter semantics are necessary in the description text.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific action ('Submit a URL for asynchronous security analysis') with a clear verb-resource combination. Explicitly identifies itself as the 'Async counterpart of url_scanner_scan', effectively distinguishing it from the synchronous sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance ('Poll with url_scanner_async_task_status...then url_scanner_async_task_result') and clear when-to-use guidance ('for clients without native MCP Tasks support'), directly addressing alternative selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
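The submit-then-poll contract described here can be sketched at the JSON-RPC level. The tool name, the url argument, and the task_id terminology come from the listing above; the request envelope follows the generic MCP tools/call shape, and everything else is illustrative:

```python
import json

def make_async_scan_request(url: str, request_id: int = 1) -> str:
    """Build an MCP tools/call request for url_scanner_async_scan.

    The tool returns immediately with a task_id to poll later.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "url_scanner_async_scan",
            # Scheme is optional: the server assumes https:// if it is missing.
            "arguments": {"url": url},
        },
    })
```

An actual MCP client would send this over the Streamable HTTP transport listed above and read the task_id out of the tool result.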

url_scanner_async_scan_with_intent (Read-only)

Submit a URL with optional user intent for asynchronous security analysis. Returns immediately with a task_id. Poll with url_scanner_async_task_status to check progress, then url_scanner_async_task_result to get the scan result. Async counterpart of url_scanner_scan_with_intent for clients without native MCP Tasks support.

Parameters (JSON Schema)
- url (required): The URL to analyze. Must be HTTP or HTTPS. If no scheme is provided, https:// is assumed.
- intent (optional): User intent for visiting the URL. Recommended for additional context.

Behavior: 4/5

While annotations declare readOnlyHint and openWorldHint, the description adds critical behavioral context not covered by annotations: the immediate return pattern (task_id), the necessity of polling for completion, and the three-step async lifecycle. It does not mention rate limits or error states, but covers the core async behavior well.

Conciseness: 5/5

Three tightly constructed sentences: purpose declaration, return behavior and workflow, and sibling differentiation. Zero redundancy. Information is front-loaded with the core action ('Submit a URL...') followed by operational details.

Completeness: 5/5

Despite lacking an output schema, the description compensates by specifying the exact return value (task_id) and the complete retrieval workflow. Given the two-parameter input with full schema coverage and present annotations, the description provides sufficient context for correct invocation and result handling.

Parameters: 3/5

Schema description coverage is 100%, with both 'url' and 'intent' fully documented in the schema (including protocol handling and optionality). The description references these parameters but does not add syntax details, validation rules, or examples beyond what the schema already provides. The baseline of 3 is appropriate given complete schema coverage.

Purpose: 5/5

The description explicitly states that the tool submits a URL for 'asynchronous security analysis' — specific verb, resource, and scope. It clearly distinguishes itself from siblings by noting it accepts 'optional user intent' (differentiating it from url_scanner_async_scan) and is the 'Async counterpart' of the sync version.

Usage Guidelines: 5/5

Provides explicit workflow guidance ('Returns immediately with a task_id') and specifies the exact polling sequence using named sibling tools. Explicitly identifies when to use this over alternatives: 'for clients without native MCP Tasks support.'

url_scanner_async_task_result (Read-only)

Retrieve the result of an asynchronous scan task. If completed, returns the full scan result (risk_score, confidence, agent_access_directive, etc.). If still running, returns status with retry_after_ms — call again after that interval. Non-blocking.

Parameters (JSON Schema)
- task_id (required): The task ID to retrieve the result for.

Behavior: 4/5

Adds substantial context beyond annotations: discloses the dual-state behavior (completed vs. running), lists specific return fields (risk_score, confidence, agent_access_directive), explains the polling mechanism (retry_after_ms), and explicitly states 'Non-blocking'. It does not mention error states or rate limits, preventing a 5.

Conciseness: 5/5

Four sentences with zero waste: (1) purpose statement, (2) success-case fields, (3) pending case with an action instruction, (4) blocking behavior. Front-loaded with the core verb, no redundant phrases.

Completeness: 4/5

Compensates well for the missing output schema by describing the return structure (specific fields when completed, retry_after_ms when pending) and the behavioral pattern. Adequate for an async polling tool with good annotations, though an explicit mention of the prerequisite (an async scan must be submitted first) would improve completeness.

Parameters: 3/5

The schema has 100% description coverage for the single 'task_id' parameter ('The task ID to retrieve the result for.'). The description does not explicitly reference the parameter or add semantic context (e.g., where to obtain the task_id), but the baseline of 3 is appropriate given complete schema coverage.

Purpose: 5/5

A clear, specific verb ('Retrieve') plus resource ('result of an asynchronous scan task'). Distinguishes itself from sync siblings (url_scanner_scan) by specifying 'asynchronous', and implies the difference from url_scanner_async_task_status by focusing on result retrieval rather than just status checking.

Usage Guidelines: 4/5

Provides explicit polling guidance ('call again after that interval' when seeing retry_after_ms), establishing the async usage pattern. However, it does not explicitly differentiate when to use this versus the sibling url_scanner_async_task_status, though the behavioral description implies this tool returns full results while the other likely does not.

url_scanner_async_task_status (Read-only)

Check the status of an asynchronous scan task. Returns the current task status using native MCP task semantics (working, completed, failed, cancelled) without blocking. Use url_scanner_async_task_result to retrieve the result once completed.

Parameters (JSON Schema)
- task_id (required): The task ID to check status for.

Behavior: 4/5

While annotations confirm the read-only, non-destructive nature, the description adds valuable behavioral specifics: the exact status values returned (working, completed, failed, cancelled) and the non-blocking execution model. No contradictions with annotations.

Conciseness: 5/5

Three sentences with zero waste: the first states the purpose, the second discloses return values and blocking behavior, the third provides sibling guidance. Perfectly front-loaded and appropriately sized.

Completeness: 4/5

For a single-parameter status checker with safety annotations, the description is complete. It compensates for the missing output schema by enumerating the possible status values. Minor gap: it doesn't mention error handling for invalid task_ids.

Parameters: 3/5

Schema coverage is 100% (task_id fully described), establishing the baseline of 3. The description implies the parameter by referencing the 'task' but does not add formatting details, validation rules, or sourcing guidance for the task_id beyond the schema.

Purpose: 5/5

The description opens with the specific verb 'Check' and the clear resource 'status of an asynchronous scan task'. It explicitly distinguishes itself from the sibling url_scanner_async_task_result by stating that this tool only checks status while the sibling retrieves results.

Usage Guidelines: 5/5

Explicitly states when to use the sibling tool ('Use url_scanner_async_task_result to retrieve the result once completed'), establishing the correct workflow pattern: check status first, fetch results only when completed.
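Taken together, the status and result tools define a standard poll loop. The sketch below assumes a hypothetical call_tool(name, arguments) function standing in for an MCP client's tool-call method, and assumes the responses surface the status and retry_after_ms fields named in the descriptions:

```python
import time

def wait_for_scan(call_tool, task_id: str, timeout_s: float = 60.0) -> dict:
    """Poll url_scanner_async_task_status until the task finishes, then
    fetch the full scan via url_scanner_async_task_result.

    call_tool(name, arguments) is a hypothetical stand-in for an MCP
    client's tool-call method; the response shapes are assumptions.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = call_tool("url_scanner_async_task_status", {"task_id": task_id})
        if status["status"] == "completed":
            return call_tool("url_scanner_async_task_result", {"task_id": task_id})
        if status["status"] in ("failed", "cancelled"):
            raise RuntimeError(f"scan task {task_id} ended as {status['status']}")
        # Both tools are non-blocking; honor the server's suggested interval.
        time.sleep(status.get("retry_after_ms", 500) / 1000)
    raise TimeoutError(f"scan task {task_id} did not finish within {timeout_s}s")
```

A client with native MCP Tasks support would not need this loop; per the tool descriptions, it exists for clients without that support.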

url_scanner_scan (Read-only)

Analyze a URL for security threats (synchronous, blocks until complete or timeout). Returns risk score, confidence, agent access guidance, and intent_alignment (always not_provided for this tool; use url_scanner_scan_with_intent for intent context). For long-running scans, prefer url_scanner_async_scan, which returns immediately with a task_id for polling via url_scanner_async_task_result.

Parameters (JSON Schema)
- url (required): The URL to analyze. Must be HTTP or HTTPS. If no scheme is provided, https:// is assumed.

Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it discloses that the tool is synchronous and blocks, mentions timeout behavior, and describes the return values (risk score, confidence, etc.). While annotations cover the read-only and non-destructive aspects, the description enriches understanding with operational details such as the blocking nature and output structure.

Conciseness: 5/5

The description is efficiently structured: the opening states the purpose and key behavioral traits, and the close provides clear usage alternatives. Every phrase adds value without redundancy, making it easy to parse and front-loaded with essential information.

Completeness: 4/5

Given the tool's moderate complexity (security scanning with blocking behavior), annotations cover the safety aspects and the schema fully documents the inputs. The description compensates for the lack of an output schema by detailing return values. It could improve slightly by mentioning error cases or the specific timeout duration, but overall it is quite complete for the context.

Parameters: 3/5

Schema description coverage is 100%, so the input schema fully documents the single 'url' parameter. The description doesn't add any parameter-specific details beyond what is in the schema (e.g., no extra format constraints or examples), meeting the baseline of 3 when schema coverage is complete.

Purpose: 5/5

The description clearly states the specific action ('Analyze a URL for security threats') and resource (a URL), distinguishing it from siblings by specifying that it is synchronous and returns risk score, confidence, etc. It explicitly differentiates itself from url_scanner_scan_with_intent by noting that intent_alignment is 'always not_provided' for this tool.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool versus alternatives: it recommends url_scanner_async_scan for long-running scans and url_scanner_scan_with_intent for intent context. It also specifies that this tool blocks until complete or timeout, helping users choose based on performance needs.
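The scan result fields listed here (risk_score, confidence, agent_access_directive) suggest a simple preflight gate for agents. The field names come from the tool description; the 'deny' directive value and the 0-to-1 risk scale used below are assumptions, not documented by the server:

```python
def may_click(scan: dict, max_risk: float = 0.5) -> bool:
    """Preflight gate on a url_scanner_scan result.

    Field names (risk_score, agent_access_directive) come from the tool
    description; the 'deny' value and the 0..1 risk scale are assumptions.
    """
    if scan.get("agent_access_directive") == "deny":
        return False
    # Treat a missing score as maximally risky rather than permitting the click.
    return scan.get("risk_score", 1.0) <= max_risk
```

Failing closed on missing fields matches the service's stated purpose: the agent should not follow a link it could not vet.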

url_scanner_scan_with_intent (Read-only)

Analyze a URL for security threats with optional user intent context (synchronous, blocks until complete or timeout). Returns risk score, confidence, agent access guidance, and intent_alignment. For long-running scans, prefer url_scanner_async_scan_with_intent, which returns immediately with a task_id for polling via url_scanner_async_task_result.

Parameters (JSON Schema)
- url (required): The URL to analyze. Must be HTTP or HTTPS. If no scheme is provided, https:// is assumed.
- intent (optional): User intent for visiting the URL. Recommended for additional context.

Behavior: 4/5

Annotations indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable behavioral context: the tool is synchronous, blocks until completion or timeout, and returns specific outputs (risk score, confidence, etc.). This enhances transparency beyond annotations without contradictions.

Conciseness: 5/5

The description is front-loaded with core functionality, followed by critical behavioral details and sibling-tool guidance. Every sentence earns its place by providing essential information without redundancy, making it efficient and well structured.

Completeness: 4/5

Given the tool's complexity (security scanning with intent) and the lack of an output schema, the description adequately covers purpose, behavior, and alternatives. However, it could briefly mention error handling or timeout specifics for full completeness, though annotations help mitigate the gaps.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents the two parameters (url and intent). The description does not add meaning beyond the schema, as it does not elaborate on parameter usage or constraints. The baseline of 3 is appropriate given the high schema coverage.

Purpose: 5/5

The description clearly states the specific action ('Analyze a URL for security threats') and resource (a URL), distinguishing it from siblings by mentioning the optional user intent context. It explicitly contrasts with the async alternatives, making the purpose distinct and well defined.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool versus alternatives: it specifies that this tool is synchronous and blocks until complete or timeout, and recommends url_scanner_async_scan_with_intent for long-running scans. This directly addresses sibling-tool selection.
