# check_urls
Analyze multiple URLs in one request. Get structured security signals across seven dimensions: redirects, brand detection, domain intelligence, SSL, parked domains, URL structure, and DNS. Async results are handled automatically.
## Instructions
Check multiple URLs in a single batch. Returns results for all URLs, handling async processing automatically.
Each URL is analysed across seven dimensions: redirect behaviour, brand impersonation, domain intelligence (age, registrar, expiration, status codes, nameservers via RDAP), SSL/TLS validity, parked domain detection, URL structural analysis, and DNS enrichment. Known and cached URLs return results immediately. Unknown URLs are queued for pipeline processing. This tool automatically polls for results until all URLs are complete or the 5-minute timeout is reached. You don't need to manage polling or job tracking.
If the timeout is reached before all results are complete, the tool returns whatever is available, with a clear message indicating which URLs are still processing. The user can check those results later via check_history.
Maximum 500 URLs per call. For larger datasets, call this tool multiple times with chunks of up to 500 URLs.
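The chunking described above can be done client-side before calling the tool. The sketch below is a minimal illustration; the `chunkUrls` helper is hypothetical and not part of the server:

```typescript
// Hypothetical client-side helper: split a large URL list into batches of
// at most 500, the per-call limit for check_urls.
function chunkUrls(urls: string[], size = 500): string[][] {
  const chunks: string[][] = [];
  for (let i = 0; i < urls.length; i += size) {
    chunks.push(urls.slice(i, i + size));
  }
  return chunks;
}

// 1,203 URLs become three calls: 500 + 500 + 203
const many = Array.from({ length: 1203 }, (_, i) => `https://example.com/page/${i}`);
const batches = chunkUrls(many);
```

Each inner array can then be passed as the `urls` argument of a separate check_urls call.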
Billing: Same as check_url. Known and cached domains are free. Only unknown domains running through the full pipeline cost 1 credit each. The summary shows pipeline_checks_charged (the actual number of credits consumed). If you don't have enough credits for the unknowns in the batch, the entire batch is rejected with a 402 error telling you exactly how many credits are needed.
Duplicate URLs in the list are automatically deduplicated (processed once, charged once). Invalid URLs get individual error status without rejecting the batch.
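A consumer-side sketch of reading a response under these rules. The field names (`status`, `pipeline_checks_charged`) match what the handler emits; the interfaces and the `pendingUrls` helper are illustrative, not part of the server:

```typescript
// Result item and summary shapes as emitted by the check_urls handler
interface BatchResultItem {
  url: string;
  status: string; // "complete" | "completed" | "pending" | "error" | "failed"
}

interface BatchSummary {
  total: number;
  complete: number;
  pending: number;
  failed: number;
  pipeline_checks_charged: number; // credits actually consumed
}

// URLs still processing after the 5-minute timeout; follow up via check_history
function pendingUrls(results: BatchResultItem[]): string[] {
  return results.filter((r) => r.status === "pending").map((r) => r.url);
}
```

Note that invalid URLs surface as individual `error` items in `results`, so a partially bad list still returns a usable summary.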
Use the "profile" parameter to score all results with custom weights.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| urls | Yes | List of URLs to check (maximum 500 per call) | |
| profile | No | Name of a custom scoring profile to use for all URLs (optional) | |
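A client-side mirror of these constraints (1 to 500 URLs, each at most 2048 characters) can catch bad input before spending a call. This sketch is hypothetical and uses the WHATWG `URL` constructor as a rough stand-in for Zod's `.url()` check:

```typescript
// Hypothetical pre-flight check mirroring the check_urls input schema;
// returns an error string, or null if the arguments would pass validation.
function validateBatchArgs(urls: string[]): string | null {
  if (urls.length < 1) return "urls must contain at least 1 entry";
  if (urls.length > 500) return "urls must contain at most 500 entries";
  for (const u of urls) {
    if (u.length > 2048) return `URL too long (max 2048 chars): ${u.slice(0, 40)}...`;
    try {
      new URL(u); // rough stand-in for z.string().url()
    } catch {
      return `invalid URL: ${u}`;
    }
  }
  return null;
}
```

Anything this rejects would also be rejected by the server-side schema, so the check costs nothing and saves a round trip.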
## Implementation Reference
- src/tools/batch.ts:21-147 (registration): Registration of the 'check_urls' tool using server.registerTool(). Binds the handler, input schema, and description.
```typescript
export function registerBatchTool(server: McpServer, api: UnphurlAPI): void {
  server.registerTool(
    "check_urls",
    {
      description: `Check multiple URLs in a single batch. Returns results for all URLs, handling async processing automatically.

Each URL is analysed across seven dimensions: redirect behaviour, brand impersonation, domain intelligence (age, registrar, expiration, status codes, nameservers via RDAP), SSL/TLS validity, parked domain detection, URL structural analysis, and DNS enrichment. Known and cached URLs return results immediately. Unknown URLs are queued for pipeline processing. This tool automatically polls for results until all URLs are complete or the 5-minute timeout is reached. You don't need to manage polling or job tracking.

If the timeout is reached before all results are complete, returns whatever is available with a clear message indicating which URLs are still processing. The user can check results later via check_history.

Maximum 500 URLs per call. For larger datasets, call this tool multiple times with chunks of up to 500 URLs.

Billing: Same as check_url. Known and cached domains are free. Only unknown domains running through the full pipeline cost 1 credit each. The summary shows pipeline_checks_charged (the actual number of credits consumed). If you don't have enough credits for the unknowns in the batch, the entire batch is rejected with a 402 error telling you exactly how many credits are needed.

Duplicate URLs in the list are automatically deduplicated (processed once, charged once). Invalid URLs get individual error status without rejecting the batch.

Use the "profile" parameter to score all results with custom weights.`,
      inputSchema: {
        urls: z
          .array(z.string().url().max(2048))
          .min(1)
          .max(500)
          .describe("List of URLs to check (maximum 500 per call)"),
        profile: z
          .string()
          .optional()
          .describe(
            "Name of a custom scoring profile to use for all URLs (optional)"
          ),
      },
    },
    async ({ urls, profile }, extra) => {
      if (!api.hasApiKey) return authError();

      try {
        // Step 1: Submit the batch
        const batchResponse = await api.batchCheck(urls, profile);

        // Step 2: If no job_id, everything resolved from cache/Tranco — return immediately
        if (!batchResponse.job_id) {
          return successResult(batchResponse);
        }

        // Step 3: Poll for async results
        const startTime = Date.now();
        const progressToken = extra?._meta?.progressToken;

        let jobResponse = await api.pollJob(batchResponse.job_id);

        while (jobResponse.status !== "completed") {
          // Check timeout before sleeping
          if (Date.now() - startTime > TIMEOUT_MS) {
            break;
          }
          await sleep(POLL_INTERVAL_MS);
          jobResponse = await api.pollJob(batchResponse.job_id);

          // Send progress notification if the client supports it
          if (progressToken !== undefined) {
            const completed = jobResponse.summary.completed ?? 0;
            const total = jobResponse.summary.total ?? urls.length;
            try {
              await extra.sendNotification({
                method: "notifications/progress" as const,
                params: {
                  progressToken,
                  progress: completed,
                  total,
                },
              });
            } catch {
              // Client may not support progress notifications — that's fine, skip silently
            }
          }
        }

        // Step 4: Merge batch response (known/cached) with job response (pipeline results)
        // Build a lookup from the job response for URLs that were processed async
        const jobResultMap = new Map<string, BatchResultItem>();
        for (const item of jobResponse.results) {
          jobResultMap.set(item.url, item);
        }

        // Replace pending items in the original batch response with completed results
        const mergedResults = batchResponse.results.map((item) => {
          if (item.status === "pending" && jobResultMap.has(item.url)) {
            return jobResultMap.get(item.url)!;
          }
          return item;
        });

        // Step 5: Build unified summary
        const complete = mergedResults.filter(
          (r) => r.status === "complete" || r.status === "completed"
        ).length;
        const pending = mergedResults.filter(
          (r) => r.status === "pending"
        ).length;
        const failed = mergedResults.filter(
          (r) => r.status === "error" || r.status === "failed"
        ).length;

        const result: Record<string, unknown> = {
          results: mergedResults,
          summary: {
            total: mergedResults.length,
            complete,
            pending,
            failed,
            pipeline_checks_charged:
              jobResponse.summary.pipeline_checks_charged ?? 0,
          },
        };

        // Flag partial results if timeout was reached
        if (pending > 0) {
          result.message = `Timeout reached after 5 minutes. ${pending} URL(s) still processing. Check results later via check_history.`;
        }

        return successResult(result);
      } catch (err) {
        if (err instanceof ApiRequestError) return apiErrorToResult(err);
        return errorResult(err instanceof Error ? err.message : "Unknown error");
      }
    }
  );
}
```

- src/tools/batch.ts:52-146 (handler): The handler function for check_urls. Submits the batch via the API, polls for async results with progress notifications, merges cached/known results with pipeline results, and returns a unified response.
```typescript
async ({ urls, profile }, extra) => {
  if (!api.hasApiKey) return authError();

  try {
    // Step 1: Submit the batch
    const batchResponse = await api.batchCheck(urls, profile);

    // Step 2: If no job_id, everything resolved from cache/Tranco — return immediately
    if (!batchResponse.job_id) {
      return successResult(batchResponse);
    }

    // Step 3: Poll for async results
    const startTime = Date.now();
    const progressToken = extra?._meta?.progressToken;

    let jobResponse = await api.pollJob(batchResponse.job_id);

    while (jobResponse.status !== "completed") {
      // Check timeout before sleeping
      if (Date.now() - startTime > TIMEOUT_MS) {
        break;
      }
      await sleep(POLL_INTERVAL_MS);
      jobResponse = await api.pollJob(batchResponse.job_id);

      // Send progress notification if the client supports it
      if (progressToken !== undefined) {
        const completed = jobResponse.summary.completed ?? 0;
        const total = jobResponse.summary.total ?? urls.length;
        try {
          await extra.sendNotification({
            method: "notifications/progress" as const,
            params: {
              progressToken,
              progress: completed,
              total,
            },
          });
        } catch {
          // Client may not support progress notifications — that's fine, skip silently
        }
      }
    }

    // Step 4: Merge batch response (known/cached) with job response (pipeline results)
    // Build a lookup from the job response for URLs that were processed async
    const jobResultMap = new Map<string, BatchResultItem>();
    for (const item of jobResponse.results) {
      jobResultMap.set(item.url, item);
    }

    // Replace pending items in the original batch response with completed results
    const mergedResults = batchResponse.results.map((item) => {
      if (item.status === "pending" && jobResultMap.has(item.url)) {
        return jobResultMap.get(item.url)!;
      }
      return item;
    });

    // Step 5: Build unified summary
    const complete = mergedResults.filter(
      (r) => r.status === "complete" || r.status === "completed"
    ).length;
    const pending = mergedResults.filter(
      (r) => r.status === "pending"
    ).length;
    const failed = mergedResults.filter(
      (r) => r.status === "error" || r.status === "failed"
    ).length;

    const result: Record<string, unknown> = {
      results: mergedResults,
      summary: {
        total: mergedResults.length,
        complete,
        pending,
        failed,
        pipeline_checks_charged:
          jobResponse.summary.pipeline_checks_charged ?? 0,
      },
    };

    // Flag partial results if timeout was reached
    if (pending > 0) {
      result.message = `Timeout reached after 5 minutes. ${pending} URL(s) still processing. Check results later via check_history.`;
    }

    return successResult(result);
  } catch (err) {
    if (err instanceof ApiRequestError) return apiErrorToResult(err);
    return errorResult(err instanceof Error ? err.message : "Unknown error");
  }
}
```

- src/tools/batch.ts:38-50 (schema): Input schema validation for check_urls using Zod. Defines 'urls' (array of URLs, 1-500) and optional 'profile' (string for custom scoring).
```typescript
inputSchema: {
  urls: z
    .array(z.string().url().max(2048))
    .min(1)
    .max(500)
    .describe("List of URLs to check (maximum 500 per call)"),
  profile: z
    .string()
    .optional()
    .describe(
      "Name of a custom scoring profile to use for all URLs (optional)"
    ),
},
```

- src/tools/helpers.ts:1-73 (helper): Helper utilities used by the check_urls handler: successResult, errorResult, authError, apiErrorToResult, and sleep for polling.
```typescript
// Shared utilities for MCP tool handlers
// Provides consistent success/error formatting across all tools

import type { CallToolResult } from "@modelcontextprotocol/sdk/types.js";
import { ApiRequestError } from "../api.js";

// Wrap any data as a successful MCP tool result
export function successResult(data: unknown): CallToolResult {
  return {
    content: [{ type: "text", text: JSON.stringify(data, null, 2) }],
  };
}

// Return a plain error message as an MCP tool error
export function errorResult(message: string): CallToolResult {
  return {
    content: [{ type: "text", text: JSON.stringify({ error: message }) }],
    isError: true,
  };
}

// Standard error for tools that require an API key but none is configured
export function authError(): CallToolResult {
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({
          error: "auth_required",
          message:
            "API key is missing. Set UNPHURL_API_KEY in your MCP server configuration, or use the signup tool to create an account first.",
        }),
      },
    ],
    isError: true,
  };
}

// Convert an API error into an MCP tool error
// Special-cases 402 (insufficient credits) to prompt the agent toward the purchase tool
export function apiErrorToResult(err: ApiRequestError): CallToolResult {
  const body = err.apiError;
  if (err.status === 402) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              ...body,
              _hint:
                "Use the purchase tool to buy more credits, or get_pricing to see available packages.",
            },
            null,
            2
          ),
        },
      ],
      isError: true,
    };
  }
  return {
    content: [{ type: "text", text: JSON.stringify(body, null, 2) }],
    isError: true,
  };
}

// Promise-based sleep for polling loops
export function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

- src/api.ts:135-146 (helper): API client methods used by check_urls: batchCheck() submits URLs for batch processing, and pollJob() polls async job results.
```typescript
async batchCheck(urls: string[], profile?: string): Promise<BatchResponse> {
  const body: Record<string, unknown> = { urls };
  if (profile) body.profile = profile;
  return this.doRequest<BatchResponse>("POST", "/v1/check/batch", body);
}

async pollJob(jobId: string): Promise<JobResponse> {
  return this.doRequest<JobResponse>(
    "GET",
    `/v1/jobs/${encodeURIComponent(jobId)}`
  );
}
```