unphurl-mcp

check_urls

Analyze multiple URLs in one request and get structured security signals across seven dimensions: redirects, brand detection, domain intelligence, SSL, parked domains, URL structure, and DNS. Asynchronous results are handled automatically.

Instructions

Check multiple URLs in a single batch. Returns results for all URLs, handling async processing automatically.

Each URL is analysed across seven dimensions: redirect behaviour, brand impersonation, domain intelligence (age, registrar, expiration, status codes, nameservers via RDAP), SSL/TLS validity, parked domain detection, URL structural analysis, and DNS enrichment. Known and cached URLs return results immediately. Unknown URLs are queued for pipeline processing. This tool automatically polls for results until all URLs are complete or the 5-minute timeout is reached. You don't need to manage polling or job tracking.

If the timeout is reached before all results are complete, the tool returns whatever is available, along with a clear message indicating which URLs are still processing. The user can retrieve the remaining results later via check_history.

Maximum 500 URLs per call. For larger datasets, call this tool multiple times with chunks of up to 500 URLs.
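
The chunking mentioned above can be as simple as slicing the list into groups of at most 500 before making one call per group. A minimal sketch (the helper name is hypothetical):

```typescript
// Hypothetical helper: split a large URL list into groups of at most 500,
// matching the documented per-call maximum for check_urls.
function chunkUrls(urls: string[], size = 500): string[][] {
  const chunks: string[][] = [];
  for (let i = 0; i < urls.length; i += size) {
    chunks.push(urls.slice(i, i + size));
  }
  return chunks;
}
```

Each resulting chunk can then be passed as the urls argument of a separate check_urls call.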

Billing: Same as check_url. Known and cached domains are free. Only unknown domains running through the full pipeline cost 1 credit each. The summary shows pipeline_checks_charged (the actual number of credits consumed). If you don't have enough credits for the unknowns in the batch, the entire batch is rejected with a 402 error telling you exactly how many credits are needed.
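
The billing arithmetic above reduces to counting the unknowns: cached and known URLs are free, and each URL that runs the full pipeline costs 1 credit. A sketch of that rule (the cached flag is a hypothetical field for illustration, not part of the documented response):

```typescript
// Billing rule per the description: only non-cached (unknown) URLs that run
// the full pipeline are charged, at 1 credit each. `cached` is hypothetical.
function creditsCharged(results: { cached: boolean }[]): number {
  return results.filter((r) => !r.cached).length;
}
```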

Duplicate URLs in the list are automatically deduplicated (processed once, charged once). Invalid URLs get individual error status without rejecting the batch.
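
Although the server deduplicates for you, a caller can also deduplicate client-side before submitting, which keeps local bookkeeping predictable. A one-line sketch (the helper name is hypothetical):

```typescript
// Optional client-side pre-deduplication; the server already dedupes and
// charges once per unique URL, so this only affects local bookkeeping.
function dedupeUrls(urls: string[]): string[] {
  return [...new Set(urls)];
}
```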

Use the "profile" parameter to score all results with custom weights.

Input Schema

Name    | Required | Description                                                     | Default
urls    | Yes      | List of URLs to check (maximum 500 per call)                    | (none)
profile | No       | Name of a custom scoring profile to use for all URLs (optional) | (none)

Implementation Reference

  • Registration of the 'check_urls' tool using server.registerTool(). Binds the handler, input schema, and description.
    export function registerBatchTool(server: McpServer, api: UnphurlAPI): void {
      server.registerTool(
        "check_urls",
        {
          description: `Check multiple URLs in a single batch. Returns results for all URLs, handling async processing automatically.
    
    Each URL is analysed across seven dimensions: redirect behaviour, brand impersonation, domain intelligence (age, registrar, expiration, status codes, nameservers via RDAP), SSL/TLS validity, parked domain detection, URL structural analysis, and DNS enrichment. Known and cached URLs return results immediately. Unknown URLs are queued for pipeline processing. This tool automatically polls for results until all URLs are complete or the 5-minute timeout is reached. You don't need to manage polling or job tracking.
    
    If the timeout is reached before all results are complete, returns whatever is available with a clear message indicating which URLs are still processing. The user can check results later via check_history.
    
    Maximum 500 URLs per call. For larger datasets, call this tool multiple times with chunks of up to 500 URLs.
    
    Billing: Same as check_url. Known and cached domains are free. Only unknown domains running through the full pipeline cost 1 credit each. The summary shows pipeline_checks_charged (the actual number of credits consumed). If you don't have enough credits for the unknowns in the batch, the entire batch is rejected with a 402 error telling you exactly how many credits are needed.
    
    Duplicate URLs in the list are automatically deduplicated (processed once, charged once). Invalid URLs get individual error status without rejecting the batch.
    
    Use the "profile" parameter to score all results with custom weights.`,
          inputSchema: {
            urls: z
              .array(z.string().url().max(2048))
              .min(1)
              .max(500)
              .describe("List of URLs to check (maximum 500 per call)"),
            profile: z
              .string()
              .optional()
              .describe(
                "Name of a custom scoring profile to use for all URLs (optional)"
              ),
          },
        },
        async ({ urls, profile }, extra) => {
          if (!api.hasApiKey) return authError();
    
          try {
            // Step 1: Submit the batch
            const batchResponse = await api.batchCheck(urls, profile);
    
            // Step 2: If no job_id, everything resolved from cache/Tranco — return immediately
            if (!batchResponse.job_id) {
              return successResult(batchResponse);
            }
    
            // Step 3: Poll for async results
            const startTime = Date.now();
            const progressToken = extra?._meta?.progressToken;
            let jobResponse = await api.pollJob(batchResponse.job_id);
    
            while (jobResponse.status !== "completed") {
              // Check timeout before sleeping
              if (Date.now() - startTime > TIMEOUT_MS) {
                break;
              }
    
              await sleep(POLL_INTERVAL_MS);
              jobResponse = await api.pollJob(batchResponse.job_id);
    
              // Send progress notification if the client supports it
              if (progressToken !== undefined) {
                const completed = jobResponse.summary.completed ?? 0;
                const total = jobResponse.summary.total ?? urls.length;
                try {
                  await extra.sendNotification({
                    method: "notifications/progress" as const,
                    params: {
                      progressToken,
                      progress: completed,
                      total,
                    },
                  });
                } catch {
                  // Client may not support progress notifications — that's fine, skip silently
                }
              }
            }
    
            // Step 4: Merge batch response (known/cached) with job response (pipeline results)
            // Build a lookup from the job response for URLs that were processed async
            const jobResultMap = new Map<string, BatchResultItem>();
            for (const item of jobResponse.results) {
              jobResultMap.set(item.url, item);
            }
    
            // Replace pending items in the original batch response with completed results
            const mergedResults = batchResponse.results.map((item) => {
              if (item.status === "pending" && jobResultMap.has(item.url)) {
                return jobResultMap.get(item.url)!;
              }
              return item;
            });
    
            // Step 5: Build unified summary
            const complete = mergedResults.filter(
              (r) => r.status === "complete" || r.status === "completed"
            ).length;
            const pending = mergedResults.filter(
              (r) => r.status === "pending"
            ).length;
            const failed = mergedResults.filter(
              (r) => r.status === "error" || r.status === "failed"
            ).length;
    
            const result: Record<string, unknown> = {
              results: mergedResults,
              summary: {
                total: mergedResults.length,
                complete,
                pending,
                failed,
                pipeline_checks_charged:
                  jobResponse.summary.pipeline_checks_charged ?? 0,
              },
            };
    
            // Flag partial results if timeout was reached
            if (pending > 0) {
              result.message = `Timeout reached after 5 minutes. ${pending} URL(s) still processing. Check results later via check_history.`;
            }
    
            return successResult(result);
          } catch (err) {
            if (err instanceof ApiRequestError) return apiErrorToResult(err);
            return errorResult(err instanceof Error ? err.message : "Unknown error");
          }
        }
      );
    }
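
The snippet references TIMEOUT_MS and POLL_INTERVAL_MS without defining them. Plausible definitions consistent with the documented 5-minute timeout (the poll interval value is an assumption; the real constants live elsewhere in the module):

```typescript
// TIMEOUT_MS matches the documented 5-minute ceiling; POLL_INTERVAL_MS is an
// assumed value, since the actual constant is not shown in the reference.
const TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes
const POLL_INTERVAL_MS = 2_000;   // assumed: poll every 2 seconds
```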
  • The handler function and the Zod input schema for check_urls appear inline in the registration snippet above.
  • Helper utilities used by check_urls handler: successResult, errorResult, authError, apiErrorToResult, and sleep for polling.
    // Shared utilities for MCP tool handlers
    // Provides consistent success/error formatting across all tools
    
    import type { CallToolResult } from "@modelcontextprotocol/sdk/types.js";
    import { ApiRequestError } from "../api.js";
    
    // Wrap any data as a successful MCP tool result
    export function successResult(data: unknown): CallToolResult {
      return {
        content: [{ type: "text", text: JSON.stringify(data, null, 2) }],
      };
    }
    
    // Return a plain error message as an MCP tool error
    export function errorResult(message: string): CallToolResult {
      return {
        content: [{ type: "text", text: JSON.stringify({ error: message }) }],
        isError: true,
      };
    }
    
    // Standard error for tools that require an API key but none is configured
    export function authError(): CallToolResult {
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({
              error: "auth_required",
              message:
                "API key is missing. Set UNPHURL_API_KEY in your MCP server configuration, or use the signup tool to create an account first.",
            }),
          },
        ],
        isError: true,
      };
    }
    
    // Convert an API error into an MCP tool error
    // Special-cases 402 (insufficient credits) to prompt the agent toward the purchase tool
    export function apiErrorToResult(err: ApiRequestError): CallToolResult {
      const body = err.apiError;
    
      if (err.status === 402) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(
                {
                  ...body,
                  _hint:
                    "Use the purchase tool to buy more credits, or get_pricing to see available packages.",
                },
                null,
                2
              ),
            },
          ],
          isError: true,
        };
      }
    
      return {
        content: [{ type: "text", text: JSON.stringify(body, null, 2) }],
        isError: true,
      };
    }
    
    // Promise-based sleep for polling loops
    export function sleep(ms: number): Promise<void> {
      return new Promise((resolve) => setTimeout(resolve, ms));
    }
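
To make the helper shapes concrete, here is a self-contained sketch of errorResult with the SDK's CallToolResult type approximated inline, so it runs without the MCP SDK installed (names suffixed Sketch are hypothetical):

```typescript
// Inline approximation of the SDK's CallToolResult, for illustration only.
type ToolResultSketch = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// Mirrors errorResult above: the message is wrapped in a JSON text payload
// and the result is flagged as an error.
function errorResultSketch(message: string): ToolResultSketch {
  return {
    content: [{ type: "text", text: JSON.stringify({ error: message }) }],
    isError: true,
  };
}
```

Parsing content[0].text back with JSON.parse recovers the original { error: message } object.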
  • API client methods used by check_urls: batchCheck() submits URLs for batch processing, and pollJob() polls async job results.
    async batchCheck(urls: string[], profile?: string): Promise<BatchResponse> {
      const body: Record<string, unknown> = { urls };
      if (profile) body.profile = profile;
      return this.doRequest<BatchResponse>("POST", "/v1/check/batch", body);
    }
    
    async pollJob(jobId: string): Promise<JobResponse> {
      return this.doRequest<JobResponse>(
        "GET",
        `/v1/jobs/${encodeURIComponent(jobId)}`
      );
    }
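
Note that pollJob URL-encodes the job id before interpolating it into the path, which protects the route if an id ever contains reserved characters. A small illustration (the ids shown are hypothetical):

```typescript
// Path construction as in pollJob above; encodeURIComponent escapes reserved
// characters such as "/" so they cannot alter the route.
function jobPath(jobId: string): string {
  return `/v1/jobs/${encodeURIComponent(jobId)}`;
}
```
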
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description fully covers behavioral traits: async processing, automatic polling, timeout handling with partial results, billing details (free for cached, credits for unknown, rejection if insufficient), and deduplication. No contradictions or omissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long but well-structured with a clear overview first, then detailed sections. Every sentence adds value, though some redundancy could be trimmed slightly. The front-loading of purpose is effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description is remarkably complete. It covers input constraints, processing behavior, edge cases (timeout, invalid URLs), billing, and follow-up via check_history. Nothing essential is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, but the description adds significant meaning: it explains how URLs are processed, that duplicates are deduplicated and charged once, and that the profile parameter scores results with custom weights. This goes beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool checks multiple URLs in a single batch. It explicitly distinguishes itself from the sibling check_url as the batch version, eliminating ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides comprehensive usage guidance: maximum 500 URLs per call, chunking for larger datasets, automatic polling with 5-minute timeout, duplicate handling, and error recovery via check_history. It also suggests when to use this tool vs alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/123Ergo/unphurl-mcp'
