
Server Details

URL intelligence for AI agents. One URL in, structured security and data quality signals out.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: 123Ergo/unphurl-mcp
GitHub Stars: 0
Server Listing: unphurl-mcp

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 13 of 13 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct and clearly defined purpose with no overlap. For example, check_url and check_urls are differentiated by single vs. batch processing, while check_history provides historical data. Tools like create_profile, delete_profile, and list_profiles are distinct operations within the scoring profile management workflow.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with clear, descriptive verbs (e.g., check_url, create_profile, get_balance, list_profiles). There are no deviations in style or convention, making the set highly predictable and readable.

Tool Count: 5/5

With 13 tools, the server is well-scoped for its domain of URL security analysis and account management. The count covers core operations (URL checking, scoring profiles, billing, account setup) without being excessive, and each tool serves a specific, necessary function.

Completeness: 5/5

The tool surface provides comprehensive coverage for the domain, including URL checking (single and batch), scoring profile management (create, delete, list, view defaults), account operations (signup, verification, stats), and billing (balance, pricing, purchase). There are no obvious gaps, and the tools support full workflows from setup to analysis.

Available Tools

13 tools
check_history: A
Read-only · Idempotent

View recent URL check history. Shows what URLs have been checked, their scores, phishing status, and whether each check was free or used a pipeline credit.

Results are paginated. Use page and limit parameters to navigate. Default is 20 results per page, maximum 100.

History is retained for 90 days. Account-level stats (total credits, balance) never expire.

Parameters:

  • page (optional) — Page number (default 1)

  • limit (optional) — Results per page, max 100 (default 20)
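
The pagination behaviour described above can be sketched as a simple fetch loop. This is a minimal sketch, assuming a hypothetical `call_tool(name, arguments)` helper standing in for a real MCP client (stubbed here with canned data so the example is self-contained).

```python
def call_tool(name, arguments):
    # Stub standing in for a real MCP client call to check_history.
    data = [{"url": f"https://example-{i}.test", "score": 0} for i in range(45)]
    page, limit = arguments.get("page", 1), arguments.get("limit", 20)
    start = (page - 1) * limit
    return {"results": data[start:start + limit]}

def fetch_all_history(limit=20):
    """Collect every page until a page comes back short of the limit."""
    page, collected = 1, []
    while True:
        batch = call_tool("check_history", {"page": page, "limit": limit})["results"]
        collected.extend(batch)
        if len(batch) < limit:          # last page reached
            return collected
        page += 1

history = fetch_all_history(limit=20)   # 45 stub entries -> pages of 20, 20, 5
```

A short final page (fewer results than `limit`) is the natural stop condition; this avoids needing a separate total-count field.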
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a safe, read-only, idempotent operation with a closed world. The description adds valuable behavioral context beyond annotations: it discloses pagination behavior (default 20, max 100 results per page), data retention period (90 days for history), and that account stats never expire. This enhances the agent's understanding of operational limits and data availability.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with the core purpose, followed by pagination details, and ending with retention policies. Each sentence adds essential information without redundancy, and the length is appropriate for the tool's complexity. No wasted words or unnecessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema), rich annotations, and 100% schema coverage, the description is mostly complete. It covers purpose, usage, pagination, and data retention, but lacks details on error handling, response format, and authentication needs that could help the agent better anticipate outcomes. With annotations already covering safety, a score of 4 reflects good but not exhaustive completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the 'page' and 'limit' parameters with defaults and constraints. The description adds minimal semantic value beyond the schema, only reiterating pagination navigation and default/max values without explaining parameter interactions or edge cases. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('View recent URL check history') and resources ('URLs'), listing exactly what information is shown (scores, phishing status, credit usage). It distinguishes from siblings like 'check_url' (single check) and 'check_urls' (batch check) by focusing on historical records rather than performing new checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to view history of past checks) and implies when not to use it (e.g., for new checks, use 'check_url' or 'check_urls'). However, it doesn't explicitly name alternatives or state exclusions, such as not using it for real-time checking or account management tasks handled by other siblings like 'get_balance'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_url: A
Read-only · Idempotent

Check a single URL for security and data quality signals. Returns a risk score (0-100), detailed signal breakdown, and metadata.

Unphurl analyses URLs across seven dimensions: redirect behaviour, brand impersonation, domain intelligence (age, registrar, expiration, status codes, nameservers via RDAP), SSL/TLS validity, parked domain detection, URL structural analysis (length, path depth, subdomain count, entropy), and DNS enrichment (MX records). The score is calculated from these signals using either default weights or a custom scoring profile.

Higher scores mean more suspicious. The score is a signal, not a verdict. You decide the threshold based on the use case.

Billing: Most lookups are free. Known domains (Tranco Top 100K like google.com, github.com) return instantly with score 0 at no cost. Previously analysed domains return cached signals at no cost. Only unknown domains that run through the full analysis pipeline cost 1 pipeline check credit. The response's meta.pipeline_check_charged field tells you whether this check consumed a credit.

Use the "profile" parameter to score results with custom weights. For example, a "cold-email" profile might weight parked domains heavily while ignoring brand impersonation. Use list_profiles to see available profiles, or show_defaults to see all signal weights.

If the account has zero credits and the URL requires a full pipeline check, returns a 402 error with a link to purchase more credits.

Parameters:

  • url (required) — The URL to check (must be http:// or https://)

  • profile (optional) — Name of a custom scoring profile to use. If omitted, default weights are used.
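
A call to this tool can be sketched as a standard MCP `tools/call` request. The payload below follows the JSON-RPC shape defined by the MCP specification; the URL and the "cold-email" profile name are illustrative values, not part of this server's documentation.

```python
import json

# Sketch: the JSON-RPC payload an MCP client sends to invoke check_url
# with a custom scoring profile.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_url",
        "arguments": {
            "url": "https://example.com",   # must be http:// or https://
            "profile": "cold-email",        # optional; defaults used if omitted
        },
    },
}

payload = json.dumps(request)
```

After the call returns, the `meta.pipeline_check_charged` field in the response indicates whether the check consumed a pipeline credit.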
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it details the seven analysis dimensions, explains the risk score interpretation (higher scores mean more suspicious), billing details (free vs paid checks), and error handling (402 error). While annotations cover read-only, non-destructive, and idempotent aspects, the description enriches this with operational specifics like caching and credit consumption.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with key information, but it is moderately long due to detailed explanations of analysis dimensions, billing, and usage. Most sentences earn their place by providing essential operational details, though some redundancy exists (e.g., reiterating score interpretation).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple analysis dimensions, billing logic, sibling interactions) and the absence of an output schema, the description is highly complete. It thoroughly explains what the tool does, how to use it, behavioral nuances, and integrates with sibling tools, compensating well for the lack of structured output documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the 'url' and 'profile' parameters. The description adds some semantic context by explaining that the 'profile' parameter uses custom weights and gives examples (e.g., 'cold-email' profile), but this is marginal enhancement over the schema's clear descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check a single URL for security and data quality signals' and specifies it returns a risk score, detailed signal breakdown, and metadata. It distinguishes from sibling tools like 'check_urls' (plural) by focusing on a single URL and from 'check_history' by not involving historical data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it mentions using the 'profile' parameter for custom weights, references 'list_profiles' to see available profiles, and 'show_defaults' to see signal weights. It also explains billing implications and error conditions (402 error for insufficient credits), offering comprehensive usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_urls: A
Read-only · Idempotent

Check multiple URLs in a single batch. Returns results for all URLs, handling async processing automatically.

Each URL is analysed across seven dimensions: redirect behaviour, brand impersonation, domain intelligence (age, registrar, expiration, status codes, nameservers via RDAP), SSL/TLS validity, parked domain detection, URL structural analysis, and DNS enrichment. Known and cached URLs return results immediately. Unknown URLs are queued for pipeline processing. This tool automatically polls for results until all URLs are complete or the 5-minute timeout is reached. You don't need to manage polling or job tracking.

If the timeout is reached before all results are complete, returns whatever is available with a clear message indicating which URLs are still processing. The user can check results later via check_history.

Maximum 500 URLs per call. For larger datasets, call this tool multiple times with chunks of up to 500 URLs.

Billing: Same as check_url. Known and cached domains are free. Only unknown domains running through the full pipeline cost 1 credit each. The summary shows pipeline_checks_charged (the actual number of credits consumed). If you don't have enough credits for the unknowns in the batch, the entire batch is rejected with a 402 error telling you exactly how many credits are needed.

Duplicate URLs in the list are automatically deduplicated (processed once, charged once). Invalid URLs get individual error status without rejecting the batch.

Use the "profile" parameter to score all results with custom weights.

Parameters:

  • urls (required) — List of URLs to check (maximum 500 per call)

  • profile (optional) — Name of a custom scoring profile to use for all URLs
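
The chunking advice for larger datasets can be sketched in a few lines. This is a client-side helper under the assumptions stated in the description: at most 500 URLs per call, and duplicates deduplicated (mirrored locally here so each batch is already unique).

```python
# Split a large URL list into batches that respect the 500-URL limit.
MAX_BATCH = 500

def batches(urls, size=MAX_BATCH):
    """Yield deduplicated chunks of at most `size` URLs, order preserved."""
    unique = list(dict.fromkeys(urls))          # dedupe, keep first occurrence
    for start in range(0, len(unique), size):
        yield unique[start:start + size]

urls = [f"https://site-{i}.test" for i in range(1200)] + ["https://site-0.test"]
chunks = list(batches(urls))                    # 1200 unique URLs -> 3 chunks
```

Each chunk would then be passed as the `urls` argument of a separate check_urls call; the server deduplicates anyway, but deduplicating first avoids wasting slots in a batch.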
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it explains async processing with automatic polling, 5-minute timeout, handling of known/cached vs unknown URLs, billing details (credits for unknown domains), error handling (402 for insufficient credits), duplicate URL deduplication, and invalid URL tolerance. Annotations cover safety (readOnly, non-destructive, idempotent), but the description enriches this with operational specifics not implied by annotations alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with key information (batch processing, async handling), but it is moderately long due to detailed explanations of dimensions, billing, and error handling. Most sentences earn their place by providing essential operational context, though some details could be more condensed without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (batch processing with async operations, billing, and multiple analysis dimensions) and lack of output schema, the description is highly complete. It covers purpose, usage, behavioral traits, parameters, error handling, and integration with sibling tools, providing all necessary context for an agent to use the tool correctly without needing additional documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for parameters beyond the schema: it explains that 'urls' supports up to 500 items with automatic deduplication and error handling for invalid URLs, and clarifies that the 'profile' parameter applies custom weights to 'score all results'. The schema has 100% description coverage, so the baseline is 3, but the description enhances understanding with practical usage details, warranting a higher score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check multiple URLs in a single batch' with specific analysis across seven dimensions (redirect behaviour, brand impersonation, etc.). It distinguishes from sibling tools like 'check_url' (single URL) and 'check_history' (checking results later). The verb 'check' and resource 'URLs' are explicit with batch processing as a key differentiator.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: 'For larger datasets, call this tool multiple times with chunks of up to 500 URLs' and references 'check_history' for checking results later. It also specifies prerequisites like credit requirements and timeout handling, making it clear when and how to invoke this tool effectively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_profile: A
Idempotent

Create or update a custom scoring profile. Profiles are sparse overrides: only specify the weights you want to change. Everything else keeps its default value.

If a profile with this name already exists, it is updated with the new weights (full replacement, not merge).

Weights are points, not percentages. Each weight is the number of points that signal adds to the score when it fires. They don't need to total 100. A profile with weights totalling 90 is conservative (max possible score is 90). A profile with weights totalling 130 is aggressive (multiple signals quickly push to the cap of 100). The threshold the agent sets for action matters more than the weight totals.

Use show_defaults to see all 23 signals with their default weights and descriptions before creating a profile. Use check_url or check_urls with the "profile" parameter to score results with this profile.

Maximum 20 profiles per account. Profile name "default" is reserved.

Common profiles:

  • Cold email: weight parked (30), chain_incomplete (25), ssl_invalid (15) higher. Lower brand_impersonation (10).

  • Security bot: keep brand_impersonation high (40), increase domain_age_7 (30), redirects_5 (25).

  • Lead gen: weight parked (35), http_only (20), chain_incomplete (20) for dead business detection.

  • SEO audit: weight redirects_5 (30), chain_incomplete (30), parked (25) for link quality.

See the Unphurl API documentation for all 19 use case weight examples.

Parameters:

  • name (required) — Profile name (lowercase alphanumeric and hyphens only, 1-50 chars, e.g. 'cold-email', 'security-bot')

  • weights (required) — Custom weights for scoring signals. Only include signals you want to override. Available signals: brand_impersonation (default 40), domain_age_3 (35), domain_age_7 (25), domain_age_30 (15), domain_age_90 (5), ssl_invalid (10), http_only (5), redirects_3 (10), redirects_5 (25), chain_incomplete (15), parked (10), compound (10), phishing_floor (80), url_long (3), path_deep (3), subdomain_excessive (5), domain_entropy_high (5), url_contains_ip (10), encoded_hostname (5), tld_redirect_change (5), expiring_soon (10), domain_status_bad (15), no_mx_record (5).
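
The points-not-percentages model can be sketched as follows. The additive, capped-at-100 aggregation is inferred from the description ("weights are points... multiple signals quickly push to the cap of 100"); the exact server-side formula is an assumption, and only a subset of the 23 signals is listed here.

```python
# Subset of the documented default weights (points per fired signal).
DEFAULTS = {
    "brand_impersonation": 40, "domain_age_7": 25, "ssl_invalid": 10,
    "redirects_5": 25, "chain_incomplete": 15, "parked": 10, "http_only": 5,
}

def score(fired_signals, overrides=None):
    """Sum the weight of every fired signal, capped at 100."""
    weights = {**DEFAULTS, **(overrides or {})}   # sparse override, rest default
    return min(100, sum(weights[s] for s in fired_signals))

# Cold-email profile from the examples above: parked and chain_incomplete
# weighted up, brand_impersonation weighted down.
cold_email = {"parked": 30, "chain_incomplete": 25, "ssl_invalid": 15,
              "brand_impersonation": 10}

default_score = score(["parked", "chain_incomplete"])              # 10 + 15 = 25
profile_score = score(["parked", "chain_incomplete"], cold_email)  # 30 + 25 = 55
```

This illustrates why the action threshold matters more than weight totals: the same two signals land well below or well above a threshold of 40 depending on the profile.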
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains that profiles are sparse overrides, updates are full replacements (not merges), maximum 20 profiles per account, and the 'default' profile name is reserved. While annotations cover idempotency and non-destructive aspects, the description enriches this with practical constraints and operational details, though it doesn't mention rate limits or auth needs explicitly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with key information, but it includes extensive examples and API references that, while helpful, could be trimmed for brevity. Most sentences earn their place by clarifying usage, but the last paragraph with API documentation reference might be slightly excessive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with idempotent behavior, nested objects, no output schema), the description is highly complete. It covers purpose, usage guidelines, behavioral traits, parameter context, and integration with sibling tools, providing sufficient information for an agent to invoke it correctly without needing output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds context about weights being points not percentages, weight totals affecting conservativeness/aggressiveness, and examples of common profiles, but doesn't provide additional syntax or format details beyond what the schema already documents for 'name' and 'weights' parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Create or update a custom scoring profile' with specific details about sparse overrides and full replacement behavior. It distinguishes from siblings like 'show_defaults' (for viewing defaults) and 'delete_profile' (for removal), making the verb+resource distinction explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use show_defaults to see all 23 signals with their default weights and descriptions before creating a profile' and 'Use check_url or check_urls with the "profile" parameter to score results with this profile.' It also mentions alternatives like common profile examples and references to API documentation for further use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_profile: A
Destructive · Idempotent

Delete a custom scoring profile. This is permanent. Any future check requests using this profile name will fall back to default weights.

Use list_profiles to see your current profiles before deleting.

Parameters:

  • name (required) — Name of the profile to delete
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explicitly states 'This is permanent' and explains the consequence that 'Any future check requests using this profile name will fall back to default weights.' While annotations cover destructiveHint and idempotentHint, the description provides practical implications not captured in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and key behavioral details in the first sentence, followed by a practical usage tip. Both sentences are essential—the first explains the action and consequences, and the second provides actionable guidance—with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (destructive operation with permanent consequences), the description is mostly complete: it covers purpose, behavioral traits, and usage guidance. However, without an output schema, it does not describe return values or error conditions, leaving a minor gap in full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single parameter 'name.' The description does not add any additional meaning or clarification about the parameter beyond what the schema provides, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Delete') and resource ('a custom scoring profile'), distinguishing it from siblings like 'create_profile' and 'list_profiles'. It goes beyond the tool name by specifying the nature of the resource being deleted.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by recommending 'Use list_profiles to see your current profiles before deleting,' which implies it should be used after verification. However, it does not explicitly state when not to use it or name alternatives like 'create_profile' for comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_balance: A
Read-only · Idempotent

Check your pipeline check credit balance. Shows credits remaining, total purchased, total used, and lifetime free lookups count.

Credits are consumed only when unknown domains run through the full analysis pipeline. Known domains (Tranco Top 100K) and cached domains (previously analysed by any Unphurl customer) are always free.

If credits_remaining is 0, you can still check known and cached domains for free. To check unknown domains, purchase more credits using the "purchase" tool.
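
The credit-consumption rules above can be sketched as a small decision function. This is a client-side mental model only, with illustrative stand-in domain sets; the actual Tranco list and shared cache live server-side.

```python
KNOWN_DOMAINS = {"google.com", "github.com"}     # stand-in for Tranco Top 100K
CACHED_DOMAINS = {"previously-seen.test"}        # stand-in for the shared cache

def credit_cost(domain):
    """Pipeline credits a check of `domain` consumes: 0 if known/cached, else 1."""
    if domain in KNOWN_DOMAINS or domain in CACHED_DOMAINS:
        return 0
    return 1                                      # full analysis pipeline run

costs = [credit_cost(d) for d in
         ("google.com", "previously-seen.test", "brand-new-domain.test")]
```

Only the third, unknown domain would consume a credit; with `credits_remaining` at 0, the first two checks would still succeed for free.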

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and idempotent operations, the description explains credit consumption rules, free lookup conditions for known/cached domains, and what happens when credits reach zero. This provides practical usage context that annotations alone don't convey.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three focused paragraphs: purpose statement, credit consumption rules, and usage guidance. Every sentence adds value without redundancy. The information is front-loaded with the core purpose stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless tool with comprehensive annotations, the description provides complete context. It explains what the tool does, when to use it, how the credit system works, and relationships with other tools. The absence of an output schema is compensated by detailed descriptions of what information will be returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since none exist, but it does provide important context about the tool's operation and credit system that helps the agent understand what to expect from the output.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'Check your pipeline check credit balance' with specific details about what information it shows (credits remaining, total purchased, total used, lifetime free lookups count). It clearly distinguishes this from sibling tools like 'purchase' and 'get_pricing' by focusing on balance checking rather than transactional or informational tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: 'If credits_remaining is 0, you can still check known and cached domains for free. To check unknown domains, purchase more credits using the "purchase" tool.' It also clarifies the relationship with other tools like 'check_url' by explaining credit consumption rules for different domain types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pricing: A
Read-only · Idempotent

Show available pipeline check credit packages and pricing. Returns all packages with credit counts and prices.

Packages (one-time purchase, no subscription):

  • Starter: 100 credits for $9 ($0.09 each)

  • Standard: 500 credits for $39 ($0.078 each)

  • Pro: 2,000 credits for $99 ($0.0495 each)

  • Scale: 10,000 credits for $399 ($0.0399 each)

Most URL lookups are free (known domains and cached domains). Credits are only consumed when an unknown domain runs through the full analysis pipeline. In typical use, 95-99% of URLs resolve free.

This tool does not require an API key.
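The per-credit figures quoted above follow directly from the package numbers; a quick sketch verifying the arithmetic:

```python
# Credit packages as listed above: name -> (credits, price in USD).
packages = {
    "Starter": (100, 9),
    "Standard": (500, 39),
    "Pro": (2000, 99),
    "Scale": (10000, 399),
}

for name, (credits, price) in packages.items():
    # Effective cost per credit for each one-time package.
    print(f"{name}: ${price / credits:.4f} per credit")
# → Starter $0.0900, Standard $0.0780, Pro $0.0495, Scale $0.0399
```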

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation. The description adds valuable context beyond this: it specifies that 'This tool does not require an API key,' which is crucial for authentication needs, and explains credit consumption patterns ('Most URL lookups are free...'), helping users understand typical usage. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: the first sentence states the core purpose, followed by detailed package listings and additional context about free lookups and API requirements. Every sentence adds value—none are redundant or wasteful—making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is complete. It explains what the tool does, provides pricing details, clarifies credit usage, and notes authentication requirements, covering all necessary aspects for an agent to understand and invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on output semantics, detailing the package structures and pricing, which compensates for the lack of an output schema. This exceeds the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Show available pipeline check credit packages and pricing' and 'Returns all packages with credit counts and prices.' It uses specific verbs ('show', 'returns') and identifies the resource ('credit packages and pricing'), distinguishing it from sibling tools like 'purchase' or 'get_balance' which handle different aspects of the credit system.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: to view pricing information before making a purchase. It implicitly distinguishes it from 'purchase' (which handles buying) and 'get_balance' (which checks existing credits), but does not explicitly state when not to use it or name alternatives, keeping it at a 4.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stats (A)
Read-only · Idempotent

View your account usage statistics. Shows total URLs submitted, breakdown by gate (Tranco lookups, cache lookups, pipeline checks), free rate percentage, score threshold counts, and credit balance.

Use this to understand your usage patterns: how many of your checks resolved free (known or cached domains) vs paid pipeline checks, and how many URLs scored above key thresholds.

This is useful for:

  • Checking if your scoring profile is flagging the right proportion of URLs

  • Understanding your cost efficiency (higher free rate = more value per credit)

  • Reporting usage metrics
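The free-rate idea above can be illustrated with a small sketch. The field names here are assumptions for illustration, not the tool's documented response shape:

```python
# Hypothetical stats payload shaped like the description (field names assumed).
stats = {
    "total_urls": 1000,
    "tranco_lookups": 700,   # known domains, free
    "cache_lookups": 250,    # previously analyzed domains, free
    "pipeline_checks": 50,   # unknown domains, consume a credit each
}

# Free rate = share of URLs resolved without spending a credit.
free = stats["tranco_lookups"] + stats["cache_lookups"]
free_rate = 100 * free / stats["total_urls"]
print(f"free rate: {free_rate:.1f}%")  # → free rate: 95.0%
```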

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare this as read-only, non-destructive, and idempotent. The description adds valuable behavioral context beyond annotations by explaining what kind of statistics are returned (breakdown by gate types, free rate, thresholds) and the practical implications of the data. It doesn't mention rate limits or authentication requirements, but with comprehensive annotations, this is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear opening statement followed by bullet points of use cases. Every sentence adds value by explaining what the tool shows and why it's useful. It could be slightly more concise by integrating the 'This is useful for' section more tightly, but overall it's efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there are no parameters, annotations cover safety aspects thoroughly, and the tool is a read-only statistics viewer, the description provides good contextual completeness. It explains what data is returned and why it matters. The main gap is the lack of an output schema, but the description compensates by detailing the statistics included.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are 0 parameters, and schema description coverage is 100% (empty schema). The description appropriately doesn't discuss parameters since none exist. It focuses instead on what the tool returns, which is helpful for understanding output semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'View your account usage statistics' with specific details about what it shows (total URLs submitted, breakdown by gate, free rate percentage, score threshold counts, credit balance). It distinguishes from siblings like get_balance (which might only show balance) and check_history (which shows individual checks rather than aggregated statistics).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to understand your usage patterns' and for specific use cases like checking scoring profiles, cost efficiency, and reporting). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the siblings (e.g., get_balance for just credit balance).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_profiles (A)
Read-only · Idempotent

List all custom scoring profiles on this account. Returns profile names and their custom weight overrides.

Profiles are named weight sets that change how Unphurl scores URLs. Different use cases need different scoring. A cold email agent cares about dead domains. A security bot cares about phishing. Profiles let one account serve multiple use cases.

Profiles only override specific weights. Any signal not specified in a profile uses the default weight. Use show_defaults to see all 23 signals and their default weights.
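The sparse-override behavior described above amounts to a shallow merge over the default weight map. A minimal sketch, with illustrative signal names and weights rather than the real 23 defaults:

```python
# Illustrative defaults (signal -> weight); the real server has 23 signals.
defaults = {"dead_domain": 10, "phishing_match": 25, "new_registration": 5}

# A profile stores only the weights it overrides.
cold_email_profile = {"dead_domain": 40}

# Effective weights: profile entries win, everything else keeps its default.
effective = {**defaults, **cold_email_profile}
print(effective)
# → {'dead_domain': 40, 'phishing_match': 25, 'new_registration': 5}
```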

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), but the description adds valuable context about what profiles are and how they work (e.g., 'Profiles only override specific weights. Any signal not specified in a profile uses the default weight.'). This clarifies the tool's behavior beyond the annotations without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by explanatory context. Every sentence adds value, such as explaining profiles and linking to 'show_defaults', though it could be slightly more concise by integrating some explanatory sentences more tightly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is complete. It explains what the tool does, how profiles work, and references related tools, providing all necessary context for an agent to use it effectively without over-explaining.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on the tool's purpose and output semantics, earning a high baseline score for not introducing unnecessary parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all custom scoring profiles on this account' with specific details about what it returns ('profile names and their custom weight overrides'). It distinguishes itself from sibling tools like 'show_defaults' by focusing on custom profiles rather than default settings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance by stating when to use this tool versus alternatives: 'Use show_defaults to see all 23 signals and their default weights.' It also implies context by explaining that profiles are for different use cases (e.g., cold email vs. security), helping the agent decide when this tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

purchase (A)

Purchase pipeline check credits. Returns a Stripe Checkout URL that the user must open in a browser to complete payment.

The AI cannot complete the payment. Tell the user to open the URL in their browser, complete the Stripe checkout, and then confirm they've paid. Credits are added to the account automatically once Stripe confirms payment.

After purchase, use get_balance to verify credits have been added.
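Since purchase is an MCP tool, a client invokes it with a standard tools/call request. A sketch of the JSON-RPC payload, with package ids taken from the parameter schema:

```python
import json

# MCP tools/call request for "purchase"; the result carries the Stripe
# Checkout URL, which the user (not the agent) opens in a browser.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "purchase",
        # One of: pkg_100, pkg_500, pkg_2000, pkg_10000
        "arguments": {"package": "pkg_500"},
    },
}
print(json.dumps(request, indent=2))
```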

Parameters (JSON Schema)

  • package (required): Package to purchase: pkg_100 ($9, 100 credits), pkg_500 ($39, 500 credits), pkg_2000 ($99, 2000 credits), pkg_10000 ($399, 10000 credits)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-readOnly, non-destructive operation, which aligns with the description's focus on initiating a purchase. The description adds valuable context beyond annotations: it clarifies that the AI cannot complete the payment, specifies the user must open a URL in a browser, and notes credits are added automatically after Stripe confirmation. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and return value, followed by essential usage instructions and next steps. Every sentence earns its place by providing critical information without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (financial transaction with user interaction) and lack of output schema, the description is complete: it explains what the tool does, how to use it, limitations (AI cannot pay), user actions required, and verification steps. It compensates adequately for the missing output schema by detailing the return value and post-purchase workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, detailing the package parameter with enum values and cost/credit mappings. The description does not add any parameter-specific information beyond the schema, so it meets the baseline score of 3 for high schema coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Purchase pipeline check credits') and resource ('credits'), distinguishing it from siblings like get_balance (check balance) or get_pricing (view pricing). It explicitly mentions the return value ('Stripe Checkout URL'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (to buy credits) and when not (the AI cannot complete payment). It names an alternative tool for verification ('use get_balance to verify credits') and outlines the user's required actions after invocation, clearly differentiating it from other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resend_verification (A)
Idempotent

Resend the email verification link for an existing Unphurl account.

Use this when a user signed up but their verification link expired (links are valid for 24 hours) and they need a new one. The user's API key won't work until their email is verified.

For security, the response is always the same regardless of whether the email exists, is already verified, or was rate limited. This prevents account enumeration.

Rate limited to 3 requests per email per hour.

This tool does not require an API key.

Parameters (JSON Schema)

  • email (required): Email address of the account that needs verification
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations. Annotations indicate it's not read-only, is open-world, idempotent, and non-destructive. The description elaborates with security details ('response is always the same regardless of whether the email exists, is already verified, or was rate limited'), rate limits ('3 requests per email per hour'), and authentication requirements ('does not require an API key'), which are not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidelines and behavioral details. Each sentence adds value: the first states the action, the second specifies when to use it, the third explains security behavior, the fourth covers rate limits, and the fifth notes authentication. There is no redundant or wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involving security, rate limiting, and authentication), the description provides comprehensive context. It covers purpose, usage, behavioral traits, and constraints, compensating for the lack of an output schema. The annotations support this with hints about idempotency and open-world behavior, making the description complete for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'email' parameter fully documented. The description does not add any additional semantic information about the parameter beyond what the schema provides (e.g., format or constraints). However, it implies the email must belong to an existing account, which is useful context not in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('resend the email verification link') and the target resource ('an existing Unphurl account'). It distinguishes itself from sibling tools like 'signup' (for new accounts) and 'create_profile' (for profile management) by focusing on verification resending for existing accounts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'when a user signed up but their verification link expired (links are valid for 24 hours) and they need a new one.' It also provides a clear exclusion: 'The user's API key won't work until their email is verified,' indicating this is a prerequisite for API functionality. No alternatives are mentioned, but the context is sufficiently detailed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

show_defaults (A)
Read-only · Idempotent

Show all 23 scoring signals with their default weights and descriptions. This is the baseline scoring that applies when no custom profile is specified.

Use this to understand what each signal means and how much it contributes to the score before creating custom profiles. Profiles are sparse overrides on top of these defaults.

This tool does not require an API key. The defaults are hardcoded and always available.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies that defaults are 'hardcoded and always available' and 'does not require an API key,' which are operational details not covered by the readOnly/idempotent annotations. However, it doesn't describe the return format or structure of the 23 signals, leaving some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first states the purpose, the second provides usage guidelines, and the third adds operational context. Each sentence earns its place by delivering essential information without redundancy, making it front-loaded and zero-waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema) and rich annotations (readOnly, idempotent), the description is mostly complete: it covers purpose, usage, and key behavioral traits. However, it lacks details on the output format (e.g., structure of the 23 signals), which could help an agent interpret results, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately notes 'This tool does not require an API key,' which implicitly addresses the lack of parameters by explaining why none are needed, adding useful context about authentication requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Show all 23 scoring signals') and resource ('with their default weights and descriptions'), distinguishing it from siblings like 'create_profile' or 'list_profiles' which manage custom profiles rather than displaying defaults. It explicitly defines the tool's scope as providing baseline scoring information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('to understand what each signal means and how much it contributes to the score before creating custom profiles') and provides context about alternatives ('Profiles are sparse overrides on top of these defaults'), clearly differentiating it from profile management tools. It also notes this is for baseline information when no custom profile is specified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

signup (A)

Create a new Unphurl account. Returns an API key (shown once, store it securely).

After signup, the user must check their email and click the verification link. The API key won't work for URL checks until the email is verified. Verification link expires after 24 hours. If the link expires, use the "resend_verification" tool to request a new one.

The account starts with 20 free pipeline check credits so the user can test with real URLs. Known domain lookups (google.com, github.com, etc.) and cached domain lookups are always free. To check more unknown domains through the full analysis pipeline, the user can purchase credits via the "purchase" tool.

Once the user has their API key, they need to add it to their MCP server configuration as UNPHURL_API_KEY.

This tool does not require an API key.
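The final configuration step can look like the following sketch. The exact config shape varies by MCP client, and the "unphurl" server name and URL here are placeholders:

```python
import json
import os

# Typical MCP client config entry wiring the key in as UNPHURL_API_KEY.
# The server name and URL are illustrative; use your client's actual format.
config = {
    "mcpServers": {
        "unphurl": {
            "url": "https://example.invalid/mcp",  # Streamable HTTP endpoint
            "env": {
                "UNPHURL_API_KEY": os.environ.get("UNPHURL_API_KEY", "<paste-key-here>"),
            },
        }
    }
}
print(json.dumps(config, indent=2))
```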

Parameters (JSON Schema)

  • email (required): Email address for the account
  • company (optional): Company name
  • first_name (required): First name (used for personalized emails)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it explains that the API key is shown once and must be stored securely, details email verification requirements and expiration (24 hours), describes initial free credits (20 pipeline checks) and free cached lookups, and notes that the API key won't work until verification. Annotations cover basic hints (e.g., not read-only, not destructive), but the description enriches this with practical constraints and post-signup steps, though it doesn't mention rate limits or error behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by important details like verification, credits, and configuration. Most sentences earn their place by providing critical information (e.g., API key handling, verification steps). However, it could be slightly more concise by integrating some details (e.g., the note about cached lookups feels slightly tangential).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (account creation with verification and credit systems), the description is highly complete. It covers prerequisites (no API key needed), post-actions (email verification, API key configuration), usage limits (free credits), and integration with other tools (resend_verification, purchase). With no output schema, it adequately explains the return value (API key) and next steps, leaving no significant gaps for an agent to operate effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters (email, company, first_name) adequately. The description adds no additional parameter-specific information beyond what's in the schema (e.g., it doesn't clarify format details or constraints for 'first_name' or 'company'). Thus, it meets the baseline of 3 by not detracting from the schema's documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a new Unphurl account') and resource ('account'), distinguishing it from siblings like 'create_profile' (which appears to manage user profiles rather than account creation) and 'resend_verification' (which handles post-signup verification). The verb 'Create' is precise and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (initial account creation) versus alternatives: it directs users to 'resend_verification' if verification links expire and mentions 'purchase' for buying credits. It also clarifies that this tool does not require an API key, unlike other tools that likely do. This comprehensive coverage includes both when-to-use and when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
