Glama

Server Details

Direct access to 40+ scraping and search tools. Extract structured data from Google (Search, Maps, Trends), Amazon, Airbnb, Social Media, and any web page directly into your AI agent.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 42 of 42 tools scored.

Server Coherence: A
Disambiguation: 5/5

Every tool targets a distinct combination of platform and data type (e.g., Amazon product vs. reviews, Google SERP vs. Shopping vs. Flights). Descriptions are detailed and clearly differentiate similar operations across platforms. No two tools overlap in purpose.

Naming Consistency: 5/5

All tool names follow the strict pattern 'hasdata_<platform>_<category>_<action>', with lowercase, underscore-separated platform and category segments and a camelCase action segment. The action verb (get, search, perform, scrape) is always present and positioned consistently. Zero deviation from the convention.
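As a sketch of how this convention could be checked mechanically (the pattern below is an assumption inferred from the tool names listed on this page, not a rule published by the server):

```python
import re

# Hypothetical checker for the naming convention described above:
# lowercase, underscore-separated platform/category segments, then a
# camelCase action segment starting with a lowercase verb.
NAME_PATTERN = re.compile(r"^hasdata_(?:[a-z]+_)+[a-z][A-Za-z]*$")

names = [
    "hasdata_airbnb_listing_getAirbnbListings",
    "hasdata_amazon_product_getProductDetails",
    "hasdata_bing_serp_getSearchResults",
]
assert all(NAME_PATTERN.match(n) for n in names)
```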

Tool Count: 4/5

42 tools is on the higher end, but each covers a specific data need from a distinct source (Airbnb, Amazon, Google, etc.). The count is justified by the broad domain coverage and the lack of redundant tools. Could be considered slightly heavy but still well-scoped.

Completeness: 5/5

The toolset provides a full read-only data retrieval surface for each platform: search/listings endpoints paired with detail endpoints, plus additional specialized endpoints (reviews, seller info, photos, etc.). Missing write operations are expected for this scraping-centric server, and no obvious gaps exist for common data retrieval workflows.

Available Tools

42 tools
hasdata_airbnb_listing_getAirbnbListings (airbnb_listing: GET /) [Grade: A]

Get Airbnb Listings

Searches Airbnb for available stays by location and date range (check-in/check-out) with guest composition (adults, children, infants, pets) and nextPageToken pagination. Returns listing id/url, title, price per night and total, rating, review count, coordinates, photos, host info, room/bed/bath counts, and amenities summary. Use for travel planning agents, short-term rental market analysis, pricing/occupancy research, and lead lists for property management.

Parameters (JSON Schema)
- location (required): The location to search for listings.
- checkIn (required): The check-in date for the listings.
- checkOut (optional): The check-out date for the listings.
- adults (optional): Number of adults.
- children (optional): Number of children.
- infants (optional): Number of infants.
- pets (optional): Number of pets.
- nextPageToken (optional): The token used to retrieve the next page of results.
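As an illustration of the call shape, a first-page request and a paginated follow-up might look like this (field names are taken from the parameter table above; the location, dates, and token are placeholders):

```python
# Hypothetical argument payloads for hasdata_airbnb_listing_getAirbnbListings.
first_page = {
    "location": "Lisbon, Portugal",  # required
    "checkIn": "2026-07-01",         # required
    "checkOut": "2026-07-05",
    "adults": 2,
    "children": 1,
    "infants": 0,
    "pets": 0,
}

# Subsequent pages reuse the same filters plus the token returned
# with the previous page of results.
next_page = {**first_page, "nextPageToken": "TOKEN_FROM_PREVIOUS_RESPONSE"}
```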
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the behavioral burden. It mentions pagination via nextPageToken and output fields, but lacks details on rate limits, data freshness, or any authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences: first states the main function, second lists results and use cases. No wasted words, well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description reasonably covers inputs and outputs. Could elaborate on pagination details (e.g., how to obtain nextPageToken), but overall sufficient for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all 8 parameters. The description adds context by grouping parameters (e.g., guest composition) and listing output fields, but does not significantly expand on parameter meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'searches Airbnb for available stays by location and date range' and lists detailed return fields. It distinguishes itself from siblings like 'getAirbnbPropertyDetails' by focusing on listings search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases such as travel planning, market analysis, and pricing research. However, it does not mention when not to use or compare with the property details tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_airbnb_property_getAirbnbPropertyDetails (airbnb_property: GET /) [Grade: A]

Get Airbnb Property Details

Fetches the full Airbnb property page by listing URL. Returns title, description, location, coordinates, price breakdown, cleaning/service fees, rating and review distribution, host profile, room/bed/bath counts, photos, amenities list, house rules, cancellation policy, and availability calendar hints. Use for travel-planning agents, deep-dive rate research, photo/amenity enrichment of listings discovered via the listing-search endpoint, and LSTR compliance or market-quality audits.

Parameters (JSON Schema)
- url (required): The URL of the Airbnb listing. Must be a valid Airbnb listing URL.
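The schema only says the URL "must be a valid Airbnb listing URL"; a client could pre-flight that loosely before spending a call. The heuristic below is a sketch and not the server's actual validation:

```python
from urllib.parse import urlparse

def looks_like_airbnb_listing(url: str) -> bool:
    # Loose heuristic: http(s) scheme, an airbnb domain, and a /rooms/ path,
    # matching the common listing-URL shape. The server's real rules may differ.
    parts = urlparse(url)
    return (
        parts.scheme in ("http", "https")
        and "airbnb." in parts.netloc
        and "/rooms/" in parts.path
    )
```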
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It describes output but does not disclose side effects, authorization needs, rate limits, or whether it's read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with a one-line summary, output list, and use cases. Slightly verbose but front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers purpose, output, and usage context. Missing behavioral info, but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description only reiterates that the URL must be valid. Adds minimal value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches full Airbnb property details by listing URL, listing specific fields returned. It distinguishes from sibling tools by referencing the listing-search endpoint for discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists use cases (travel planning, deep-dive research, enrichment, compliance audits) and mentions the listing-search endpoint as alternative. Could be more explicit about when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_amazon_product_getProductDetails (amazon_product: GET /) [Grade: A]

Get Amazon Product Details

Fetches a single Amazon product page by ASIN on a chosen Amazon domain (amazon.com, .co.uk, .de, .jp, etc.). Returns title, brand, current/list/deal price, currency, availability, Buy Box seller, Prime eligibility, bullet points, A+ description, rating and review count, images, category breadcrumbs, variants/sibling ASINs, and the other-sellers offers block (when otherSellers=true), plus delivery-zone-aware pricing when a shipping zip/location is set. Use for product research agents, price/stock monitoring, catalog enrichment, listing QA, Buy Box tracking, and cross-locale competitive analysis.

Parameters (JSON Schema)
- asin (required): The Amazon Standard Identification Number (ASIN) of the product.
- domain (optional): Amazon domain to use. Default is www.amazon.com.
- language (optional): Optional Amazon language code. Supported values depend on the selected domain.
- deliveryZip (optional): Postal code of the delivery location.
- shippingLocation (optional): The two-letter country code to define the country of the delivery address.
- otherSellers (optional): If set to true, extracts the other sellers block from the product page.
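To illustrate how the conditional behaviors combine, here are two hypothetical payloads (the ASIN and zip code are placeholders, not real values):

```python
# Hypothetical payloads for hasdata_amazon_product_getProductDetails.
base = {"asin": "B000000000", "domain": "www.amazon.co.uk"}

# otherSellers=true additionally pulls the other-sellers offers block, and a
# delivery zip switches pricing/availability to that delivery zone.
with_offers = {**base, "otherSellers": True, "deliveryZip": "SW1A 1AA"}
```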
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral aspects. It describes the tool's actions (fetching, returning fields) and conditional behavior for otherSellers and delivery pricing. However, it lacks details on error handling, rate limits, authentication needs, or response structure when outputs are missing, leaving gaps for a complete behavioral picture.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the tool's name and a concise action statement. It then lists returned fields and use cases efficiently. While slightly lengthy as a paragraph, each sentence serves a purpose; bullet points could improve scannability but are not required.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description adequately covers key return fields (title, brand, price, etc.) and optional conditional outputs (other sellers, delivery-zone pricing). It explains input parameters and their effects, but the absence of exact response structure or field presence guarantees leaves minor gaps for an agent seeking complete clarity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds meaningful context beyond schema descriptions. It explains the role of otherSellers (triggers sellers block), deliveryZip and shippingLocation (affect pricing), and domain/language (cross-locale support). This enhances understanding beyond raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: fetching product details by ASIN from a chosen Amazon domain. It lists specific fields returned (title, brand, price, etc.) and use cases (product research, monitoring, etc.), distinguishing it from sibling tools like search or reviews which serve different functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool (e.g., product research, price monitoring, catalog enrichment). It implies when to use optional parameters like otherSellers=true, but does not explicitly mention when not to use or directly name alternatives among siblings, though sibling names suggest distinct purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_amazon_reviews_getProductReviews (amazon_reviews: GET /) [Grade: A]

Get Amazon Product Reviews

Paginated fetch of customer reviews for an Amazon ASIN with filters for star rating (1-5, positive, critical), reviewer type (all vs verified purchase), media-only reviews, current-variant vs all-formats, keyword search, and sort (helpful/recent). Returns per-review title, body, star rating, author name and profile, review date, country, verified-purchase flag, helpful-vote count, variant/format attributes, and attached media URLs, plus aggregate rating histogram. Use for voice-of-customer analysis, sentiment and theme extraction, feature-request mining, competitor review benchmarking, and feeding review-summarization or Q&A agents.

Parameters (JSON Schema)
- asin (required): The Amazon Standard Identification Number (ASIN) of the product.
- page (optional): The page number to retrieve.
- stars (optional): The star ratings to filter reviews.
- reviewerType (optional): The type of reviewers to filter.
- mediaType (optional): The media type to filter reviews.
- format (optional): The format type to filter reviews. Include reviews of any product format/variant or specifically to the current format/variant.
- searchTerm (optional): A term to search within the reviews.
- sortBy (optional): The criterion to sort reviews.
- domain (optional): Amazon domain to use. Default is www.amazon.com.
- language (optional): Optional Amazon language code. Supported values depend on the selected domain.
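A typical filter combination might look like the following. The enum spellings ("critical", "verified_purchase", "recent") are assumptions for illustration; the JSON schema defines the exact accepted values:

```python
# Hypothetical filter combination for hasdata_amazon_reviews_getProductReviews:
# critical verified-purchase reviews mentioning "battery", most recent first.
critical_battery_reviews = {
    "asin": "B000000000",            # placeholder ASIN
    "stars": "critical",             # assumed enum spelling
    "reviewerType": "verified_purchase",
    "searchTerm": "battery",
    "sortBy": "recent",
    "page": 1,
}
```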
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'Paginated fetch' and details the output fields, but does not disclose rate limits, authentication requirements, or potential failures (e.g., invalid ASIN). It implies read-only behavior but does not state it. The description adds some transparency but lacks completeness on non-functional aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph but is well-organized: purpose first, then filters, then output, then use cases. It is informative without redundancy. It could be slightly more structured (e.g., bullet points) but remains concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 10 parameters, no output schema, and no annotations, the description provides a comprehensive overview. It explains the paginated nature, all filter dimensions, the return fields (including title, body, star rating, etc.), and includes practical use cases. The agent has sufficient context to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 10 parameters have descriptions in the input schema (100% coverage), so the schema already provides baseline meaning. The description adds value by grouping filters thematically and listing examples (e.g., 'star rating (1-5, positive, critical)') and the output fields. This enriches the semantic understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with a clear verb+resource: 'Get Amazon Product Reviews'. It distinguishes from sibling tools like getProductDetails or getSearchResults by focusing on reviews with pagination and filters. The purpose is unambiguous and specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists explicit use cases (voice-of-customer analysis, sentiment extraction, etc.) and the various filter parameters imply when to use each. However, it does not explicitly state when not to use the tool or compare to alternatives, though the sibling list suggests other Amazon-specific tools exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_amazon_search_getSearchResults (amazon_search: GET /) [Grade: A]

Get Amazon Search Results

Runs a keyword search on a chosen Amazon domain with pagination, delivery zip/location scoping, and sort order (featured, price low-to-high, price high-to-low, avg-customer-review, newest). Returns the organic results list with ASIN, title, thumbnail, product URL, price and list price, currency, star rating, review count, Prime/sponsored flags, and position, plus related search suggestions and filter facets. Use for SERP monitoring, keyword/share-of-shelf tracking, competitor discovery, ASIN harvesting to feed downstream product/reviews endpoints, and building product-research or price-comparison agents.

Parameters (JSON Schema)
- q (required): The search term for which to get the search results.
- page (optional): Page number for pagination (e.g., 1 for the first page, 2 for the second page, etc.).
- sortBy (optional): Parameter used for sorting results.
- domain (optional): Amazon domain to use. Default is www.amazon.com.
- language (optional): Optional Amazon language code. Supported values depend on the selected domain.
- deliveryZip (optional): Postal code of the delivery location.
- shippingLocation (optional): The two-letter country code to define the country of the delivery address.
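A paginated, locale-scoped query could be assembled like this (the sortBy spelling is an assumption; the schema defines the exact enum):

```python
# Hypothetical query for hasdata_amazon_search_getSearchResults: page 2 of a
# keyword search on amazon.de, scoped to a German delivery address.
query = {
    "q": "mechanical keyboard",
    "domain": "www.amazon.de",
    "page": 2,
    "sortBy": "price-low-to-high",   # assumed enum spelling
    "shippingLocation": "DE",        # two-letter country code
}
```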
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations available, so the description carries the full burden. It explains the return values (organic results, related suggestions, filter facets) and parameters (pagination, domain, zip, sort). However, it does not explicitly state that the operation is read-only or disclose any side effects, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense paragraph that front-loads the primary action and then lists return details and use cases. It is not overly long and conveys all necessary information without redundancy. Slight improvement could be breaking into bullet points for readability, but it remains concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 parameters, no output schema), the description covers the tool's purpose, parameters, return fields, and use cases thoroughly. Minor gaps exist: it does not explain pagination mechanics (e.g., page numbering limits) or any rate limits. Still, it is above average in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is high. The description adds context by grouping parameters (e.g., 'delivery zip/location scoping') and summarizing acceptable sort orders, but does not provide new details beyond what the schema already specifies. It also lists return fields not in the schema, which adds value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it runs a keyword search on Amazon and returns organic search results with detailed fields (ASIN, title, price, etc.). This distinguishes it from sibling tools like getProductDetails (which retrieves a single product page) or getProductReviews.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly lists use cases: SERP monitoring, keyword tracking, competitor discovery, ASIN harvesting, and building research agents. It implies when to use (search queries) but does not explicitly state when not to use or suggest alternatives, though the use cases are detailed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_amazon_seller_getSellerDetails (amazon_seller: GET /) [Grade: A]

Get Amazon Seller Details

Fetches the public storefront profile for an Amazon seller by sellerId on the chosen domain/language. Returns business name, seller logo, About-this-seller text, overall feedback rating and lifetime/12-month/90-day/30-day rating breakdown, feedback count, business address and contact details, customer service info, and any listed policies. Use for seller due-diligence and vetting, counterfeit/brand-protection workflows, MAP-violation investigations, building seller leaderboards, and enriching marketplace seller directories.

Parameters (JSON Schema)
- sellerId (required): The unique Amazon seller ID.
- domain (optional): Amazon domain to use. Default is www.amazon.com.
- language (optional): Optional Amazon language code. Supported values depend on the selected domain.
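For the cross-locale analysis use case mentioned above, the same seller can be checked on several marketplaces by varying only the domain (the sellerId below is a placeholder):

```python
# Hypothetical cross-locale fan-out for hasdata_amazon_seller_getSellerDetails.
seller_id = "A0000000000000"  # placeholder seller ID
calls = [
    {"sellerId": seller_id, "domain": domain}
    for domain in ("www.amazon.com", "www.amazon.de", "www.amazon.co.jp")
]
```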
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It accurately describes the returned data but does not mention potential issues like rate limits, authentication, or error handling. It implies read-only public access but lacks explicit behavioral traits beyond data content.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the main action and return fields, then lists use cases. It is somewhat verbose but well-structured, with no wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description compensates by listing return fields explicitly. However, it lacks details on error scenarios, request format, or edge cases. For a simple GET tool, it is fairly complete but could be more thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description mentions parameters by context (sellerId, domain, language) but adds no additional semantic detail beyond what the input schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it fetches the public storefront profile for an Amazon seller by sellerId, specifying the resource and action. It lists returned fields and explicitly distinguishes from sibling tools like hasdata_amazon_seller_products_getSellerProducts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear use cases (seller due-diligence, vetting, counterfeit workflows) but does not explicitly mention when not to use or name alternative tools. This is good guidance but lacks exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_amazon_seller_products_getSellerProducts (amazon_seller_products: GET /) [Grade: A]

Get Amazon Seller Products

Paginated listing of the storefront catalog offered by a given Amazon sellerId on the chosen domain. Returns each product row with ASIN, title, image, product URL, price and list price, currency, star rating, review count, and Prime flag. Use to map a competitor's or 3P seller's full assortment, detect new SKU launches, build brand-protection watchlists, drive price-intelligence pipelines, and seed per-ASIN deep-dives against the product and reviews endpoints.

Parameters (JSON Schema)
- sellerId (required): The unique Amazon seller ID.
- page (optional): Page number for pagination (e.g., 1 for the first page, 2 for the second page, etc.).
- domain (optional): Amazon domain to use. Default is www.amazon.com.
- language (optional): Optional Amazon language code. Supported values depend on the selected domain.
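Mapping a full assortment means sweeping pages until results run out. A sketch of the per-page arguments (the stopping rule and seller ID are assumptions; the tool itself only documents the page parameter):

```python
# Hypothetical pagination sweep for
# hasdata_amazon_seller_products_getSellerProducts.
def catalog_page_args(seller_id: str, page: int) -> dict:
    # Arguments for one page of a seller's storefront catalog.
    return {"sellerId": seller_id, "page": page, "domain": "www.amazon.com"}

first_three_pages = [catalog_page_args("A0000000000000", p) for p in (1, 2, 3)]
```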
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It indicates this is a read operation (GET) and explains pagination behavior and return fields. However, it does not mention rate limits, authentication requirements, or error handling, which would be helpful for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at around 100 words and front-loaded with the purpose. It covers key elements without unnecessary fluff. It could be slightly more structured (e.g., bullet points), but it is effective as a paragraph.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains the return values (ASIN, title, price, etc.) and pagination. It is complete for typical usage. It lacks details on error states or edge cases, but the complexity is moderate and the coverage is satisfactory.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add significant parameter-level details beyond the schema, though it contextualizes the sellerId and domain. The description of return values is useful but does not directly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get Amazon Seller Products' and elaborates that it returns a paginated listing of a seller's catalog with specific fields. It distinguishes from sibling tools by mentioning competitor analysis and per-ASIN deep-dives, which are not covered by other Amazon tools like getProductDetails or getSearchResults.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists concrete use cases: mapping competitor assortment, detecting new SKUs, brand protection, price intelligence. It also suggests follow-up actions (per-ASIN deep-dives). However, it does not explicitly state when not to use this tool or mention alternatives, so there is room for improvement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_bing_serp_getSearchResults (bing_serp: GET /) [Grade: A]

Get Bing Search Results

Fetches Bing SERPs for a query with geo targeting (location/lat/lon), market (mkt), country (cc), safesearch (off/moderate/strict), time/custom filters, device type, and pagination (first offset, count up to 50). Returns organic results (title, url, snippet, displayed url, position), related searches, answer boxes/knowledge panels, and pagination metadata. Use for SEO rank tracking, SERP feature monitoring, Bing-specific visibility audits, and training/eval data for search agents.

Parameters (JSON Schema)

- q (required): Specify the search term for which you want to scrape the SERP.
- cc (optional): The two-letter country code for the country to search from.
- lat (optional): GPS latitude for the search origin.
- lon (optional): GPS longitude for the search origin.
- mkt (optional): The market code for the search (e.g., en-US).
- safe (optional): Adult content filtering option (off/moderate/strict).
- count (optional): Number of results per page, ranging from 1 to 50.
- first (optional): The number of search results to skip, used for pagination. For example, a value of 1 (default) indicates the first page of results, 11 the second page, and 21 the third.
- filters (optional): Applies filters to narrow search results, including date-based options: `ex1:"ez1"` (past 24 hours), `ex1:"ez2"` (past week), `ex1:"ez3"` (past month). For complex filters, run a Bing search and copy the filters parameter from the URL.
- location (optional): Defines the search's origin location. For realistic results, set location at the city level. If omitted, the proxy's location may be used.
- deviceType (optional): Specify the device type for the search.
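The `first` offset arithmetic documented above is easy to get wrong. A minimal sketch of computing it from a 1-indexed page number, assuming the documented 1-to-50 range for `count` (the helper name is ours, not part of the API):

```python
def bing_first_offset(page: int, count: int = 10) -> int:
    """Compute Bing's `first` offset for a 1-indexed results page.

    Matches the documented examples: with count=10,
    page 1 -> 1, page 2 -> 11, page 3 -> 21.
    """
    if not 1 <= count <= 50:
        raise ValueError("count must be between 1 and 50")
    if page < 1:
        raise ValueError("page is 1-indexed")
    return (page - 1) * count + 1
```

With `count=50`, page 2 starts at offset 51.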
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the return structure (organic results, answer boxes, pagination) and key parameters (geo, filters, device). With no annotations, it adequately conveys read-only behavior and capabilities, though it doesn't mention rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A three-sentence paragraph, front-loaded with the action, efficiently covers purpose, parameters, returns, and use cases. No redundant or missing information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema or annotations, the description compensates by listing return fields and pagination behavior. It covers major use cases but omits potential error scenarios; still sufficient for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 11 parameters are described in the schema (100% coverage). The description rephrases them in context but adds limited new semantic detail, so the baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it fetches Bing SERPs, lists specific result types (organic, answer boxes, related searches), and outlines use cases (SEO rank tracking, SERP monitoring). It is distinctly a Bing tool among Google siblings, making purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use (SEO, Bing-specific audits), but lacks explicit when-not-to-use or direct comparison to Google SERP tools. The mention of 'Bing-specific' indirectly guides selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_glassdoor_job_getJobDetails (glassdoor_job: GET /) - Grade: A

Get GlassDoor Job Details

Fetches a Glassdoor job posting by its vacancy URL. Returns job title, company name and rating, location, salary estimate, employment type, posted date, full job description, qualifications/benefits, and apply link. Use for ATS ingestion, job aggregators, comp benchmarking, enrichment of company profiles, and feeding descriptions into LLM matching or resume-tailoring pipelines.

Parameters (JSON Schema)

- url (required): The URL of the job vacancy to retrieve details for.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses the basic data fetched but not behavioral traits such as error handling or rate limits. Adequate for a simple read tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences, front-loaded with purpose and data fields. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers inputs and outputs adequately for a tool with one parameter and no output schema. Lists returned fields to guide use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers the single parameter fully; the description does not add meaning beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool fetches a specific Glassdoor job posting by URL and enumerates returned fields. Distinguishes from sibling listing tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides use cases but does not explicitly state when not to use or compare to alternatives like the listing tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_glassdoor_listing_getJobListings (glassdoor_listing: GET /) - Grade: A

Get GlassDoor Job Listings

Searches Glassdoor job listings by keyword and location with sort (recent/relevant), domain targeting, and nextPageToken pagination. Returns an array of jobs with title, company, location, salary estimate, posted date, job URL, and jobId, plus the next page token. Use to build job feeds, monitor hiring trends for roles/companies/regions, power candidate sourcing tools, and collect URLs for downstream full-detail scraping via the Glassdoor Job endpoint.

Parameters (JSON Schema)

- sort (optional): The sorting option for the search results.
- domain (optional): The domain of the Glassdoor site.
- keyword (required): The keyword used to search for job listings.
- location (required): The location to search for job listings.
- nextPageToken (optional): Token for fetching the next page of jobs.
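The nextPageToken pagination this tool implies can be sketched as a simple loop that collects job URLs for downstream detail scraping. Here `fetch_page` stands in for the actual tool call (keyword and location fixed by the caller), and the `jobs`, `jobUrl`, and `nextPageToken` field names are assumptions based on the description, not a confirmed response schema:

```python
from typing import Callable, Dict, List, Optional

def collect_job_urls(
    fetch_page: Callable[[Optional[str]], Dict],
    max_pages: int = 5,
) -> List[str]:
    """Walk nextPageToken pagination and gather job URLs.

    fetch_page takes the current page token (None for the first page)
    and returns one page of results; field names here are assumed.
    """
    urls: List[str] = []
    token: Optional[str] = None
    for _ in range(max_pages):
        page = fetch_page(token)
        urls.extend(job["jobUrl"] for job in page.get("jobs", []))
        token = page.get("nextPageToken")
        if not token:  # no token means the last page was reached
            break
    return urls
```

Each collected URL can then be fed to the Glassdoor Job endpoint for full details, as the description suggests.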
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the output structure (array of jobs with fields) and pagination via nextPageToken, but does not mention rate limits or auth requirements. For a read-only listing tool, this is sufficient but could be more explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that front-loads the purpose and is reasonably concise. However, it could be slightly more terse without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no output schema), the description adequately explains the output fields and pagination, providing sufficient context for an AI agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents all parameters. The description adds value by explaining the purpose of parameters in context (e.g., sort options), but does not provide additional detail beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get GlassDoor Job Listings' and explains it searches by keyword and location, distinguishing it from sibling tools like hasdata_glassdoor_job_getJobDetails and other listing tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use it: 'to build job feeds, monitor hiring trends, power candidate sourcing tools, and collect URLs for downstream full-detail scraping via the Glassdoor Job endpoint.' This provides clear guidance on its purpose and alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_images_images_getImageSearchResults (google_images_images: GET /) - Grade: A

Get Image Search Results

Scrapes Google Images for a query with advanced filters (size, color, image type, safesearch, domain/country/language, device type) plus page-based pagination (ijn). Returns each image with title, source page URL, direct image URL, thumbnail, dimensions, source domain, and position. Use for visual-asset discovery, reverse-image workflows, dataset collection for ML/CV training, brand/logo monitoring, stock-image sourcing, and grounding multimodal LLMs with fresh image context.

Parameters (JSON Schema)

- q (required): Search query term for retrieving image results.
- gl (optional): The two-letter country code for the country you want to limit the search to.
- hl (optional): The two-letter language code for the language you want to use for the search.
- ijn (optional): Page number for paginated results, where 0 is the first page.
- tbs (optional): Customizes image search results with filters that can be combined using commas.
  - Image size: `isz:l` (large), `isz:m` (medium), `isz:i` (icon-sized), `isz:lt,islt:qsvga` (larger than 400×300), `isz:lt,islt:vga` (larger than 640×480), `isz:lt,islt:svga` (larger than 800×600), `isz:lt,islt:xga` (larger than 1024×768), `isz:lt,islt:2mp` (larger than 1600×1200), `isz:lt,islt:4mp` (larger than 2272×1704), `isz:ex,iszw:1000,iszh:1000` (exactly 1000×1000).
  - Color: `ic:color` (full color), `ic:gray` (black and white), `ic:specific,isc:red` (predominantly a specified color; other colors such as orange, yellow, green, etc. are also supported).
  - Image type: `itp:face` (faces), `itp:photo` (photographs), `itp:clipart` (clipart), `itp:lineart` (line drawings), `itp:animated` (animated GIFs).
- safe (optional): Adult content filtering option.
- uule (optional): The encoded location parameter.
- domain (optional): Google domain to use. Default is google.com.
- filter (optional): Enables (1, default) or disables (0) the filters for 'Similar Results' and 'Omitted Results'.
- location (optional): Google canonical location for the search.
- deviceType (optional): Specify the device type for the search.
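Since `tbs` filters are plain comma-joined tokens, a small helper can keep them readable. The mapping below covers only a few of the documented options, and the friendly names and function name are ours, not part of the API:

```python
# A handful of documented tbs tokens, keyed by friendly names (ours).
TBS_FILTERS = {
    "large": "isz:l",
    "medium": "isz:m",
    "grayscale": "ic:gray",
    "photo": "itp:photo",
    "animated": "itp:animated",
}

def build_tbs(*names: str) -> str:
    """Join the selected tbs filters with commas, as the schema documents."""
    try:
        return ",".join(TBS_FILTERS[name] for name in names)
    except KeyError as exc:
        raise ValueError(f"unknown tbs filter: {exc.args[0]}") from None
```

For example, `build_tbs("large", "grayscale", "photo")` yields `isz:l,ic:gray,itp:photo`.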
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It describes what the tool does but does not disclose behavioral traits such as rate limits, authentication requirements, or any side effects of scraping. This is a significant gap for a scraping tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences: the first captures the core function and filters, the second the returned fields, and the third lists distinct use cases. Every sentence adds value, and it is front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the return fields (title, URLs, dimensions, etc.) and mentions pagination via ijn. It covers the key aspects of the tool, though it lacks details on response format or pagination limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description paraphrases the filters and pagination but does not add substantial new meaning beyond what the schema already provides. It references tbs filters but does not explain them beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool scrapes Google Images for a query with advanced filters and lists the returned fields. It distinguishes itself from siblings by explicitly being an image search tool, unlike text or other media tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases such as visual-asset discovery, ML/CV dataset collection, and brand monitoring, giving clear context for when to use the tool. However, it does not mention when not to use it or explicitly call out alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_maps_contributor_reviews_getMapReviews (google_maps_contributor_reviews: GET /) - Grade: A

Get Map Contributor Reviews

Lists all Google Maps reviews authored by a specific Local Guide / contributor by contributorId, with language/country targeting and nextPageToken pagination. Returns per-review rating, text, date, place name, place address, placeId, photos, and owner responses. Use for reviewer reputation checks, detecting fake/bot review patterns, local-guide activity analysis, and building review-author profiles for trust scoring.

Parameters (JSON Schema)

- gl (optional): The two-letter country code for the country you want to limit the search to.
- hl (optional): The two-letter language code for the language you want to use for the search.
- num (optional): Number of results per page, ranging from 10 to 200.
- contributorId (required): Google Maps contributor ID.
- nextPageToken (optional): Token for retrieving the next page of results.
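A sketch of assembling the query parameters while enforcing the documented 10-200 range for `num`; the helper is hypothetical, and only the parameter names come from the schema:

```python
from typing import Optional

def contributor_reviews_params(
    contributor_id: str,
    num: int = 50,
    gl: Optional[str] = None,
    hl: Optional[str] = None,
    next_page_token: Optional[str] = None,
) -> dict:
    """Build the query params, enforcing the documented num range
    of 10-200 and dropping unset optional parameters."""
    if not 10 <= num <= 200:
        raise ValueError("num must be between 10 and 200")
    params = {
        "contributorId": contributor_id,
        "num": num,
        "gl": gl,
        "hl": hl,
        "nextPageToken": next_page_token,
    }
    return {k: v for k, v in params.items() if v is not None}
```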
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses pagination with nextPageToken and lists returned fields (rating, text, date, place name, etc.), but does not mention rate limits, data freshness, authentication requirements, or how many results per page beyond what the num parameter implies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that starts with the purpose, then enumerates returned fields and use cases. It is front-loaded and efficient, though slightly verbose with the list of use cases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately covers what is returned (rating, text, date, place name, photos, owner responses) and mentions pagination. It lacks explicit total count or error handling but is sufficient for a 5-param tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all 5 parameters. The description text reiterates 'language/country targeting and nextPageToken pagination' but does not add significant meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Lists all Google Maps reviews authored by a specific Local Guide / contributor by contributorId', clearly distinguishing it from siblings like getMapReviews which likely fetches reviews for a place. It includes specific use cases such as reviewer reputation checks and fake review detection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases like 'reviewer reputation checks, detecting fake/bot review patterns, local-guide activity analysis, and building review-author profiles for trust scoring'. However, it does not explicitly state when not to use this tool or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_maps_photos_getMapPhotos (google_maps_photos: GET /) - Grade: A

Get Place Photos

Fetches the photo gallery of a Google Maps place by dataId or placeId, paginated with nextPageToken and filterable by categoryId (all, latest, menu, by owner, videos, street view). Returns each photo with image URL, thumbnail, upload date, uploader, and photoId. Use for restaurant-menu extraction, venue/ambience visual audits, building rich place detail pages, and sourcing up-to-date imagery for POI listings.

Parameters (JSON Schema)

- hl (optional): The two-letter language code for the language you want to use for the search.
- dataId (optional): Google Maps data ID. Either dataId or placeId should be set.
- placeId (optional): Unique reference to a place on Google Maps. Either dataId or placeId should be set.
- categoryId (optional): Filters photos by category.
- nextPageToken (optional): Token for fetching the next page of photos.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description details pagination via nextPageToken, filtering by categoryId with specific values, and the return structure (image URL, thumbnail, upload date, etc.). This clearly conveys behavior beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise paragraph that front-loads the main action. It includes necessary details without redundancy, though bullet points could enhance readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description fully explains the return data (URL, thumbnail, upload date, uploader, photoId) and covers pagination and filtering. All key aspects are addressed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter. The description adds value by specifying categoryId filter options (all, latest, menu, etc.) and explaining pagination mechanics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Get Place Photos' and explains it fetches photo gallery by dataId or placeId. It clearly distinguishes from sibling tools like getPlaceDetails and getMapReviews, which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit use cases are listed, such as restaurant-menu extraction and venue visual audits. However, it does not explicitly mention when not to use or contrast with other photo-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_maps_place_getPlaceDetails (google_maps_place: GET /) - Grade: A

Get Place Details

Fetches full Google Maps place data by placeId with optional domain/language localization. Returns name, address, coordinates, phone, website, categories, hours, rating, review count, price level, photos, popular times, attributes/amenities, plus_code, and map URL. Use for local SEO audits, POI enrichment, lead generation, competitor mapping, and building location-aware agents.

Parameters (JSON Schema)

- hl (optional): The two-letter language code for the language you want to use for the search.
- domain (optional): Google domain to use. Default is google.com.
- placeId (required): A unique identifier for the place. This ID can be obtained from Google Maps search results.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It lists the data returned but does not discuss rate limits, permissions, data freshness, or any side effects. However, it is read-only and not misleading.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose first, then a list of returned fields, then use cases. It is concise but not overly terse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description compensates by listing all returned fields. It addresses core use cases, though it could mention error handling or data structure details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds context about optional localization but does not significantly extend the parameter meanings beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Fetches') and resource ('full Google Maps place data by placeId'), and lists specific use cases like local SEO audits and POI enrichment, which effectively distinguishes it from sibling tools like search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (when you have a placeId) through the parameter explanation, but does not explicitly contrast with search or other tools, nor does it provide when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_maps_reviews_getMapReviews (google_maps_reviews: GET /) - Grade: A

Get Map Reviews

Paginated fetch of Google Maps reviews for a place by dataId or placeId, with sort (qualityScore, newestFirst, ratingHigh, ratingLow), topicId filter, and language. Returns per-review author name and profile link, star rating, text, published/relative date, likes count, owner response, attached photos, and local-guide flag. Use for reputation management, sentiment and topic mining, competitor review benchmarking, and feeding review data into summarization or trust-score LLMs.

Parameters (JSON Schema)

- hl (optional): The two-letter language code for the language you want to use for the search.
- dataId (optional): Google Maps data ID.
- sortBy (optional): Parameter used for sorting and refining results.
- placeId (optional): Unique reference to a place on a Google Map. Either dataId or placeId should be set.
- topicId (optional): Defines the ID of the topic you want to use for filtering reviews.
- nextPageToken (optional): Token for retrieving the next page of results.
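Because the schema accepts either dataId or placeId, a client should validate that at least one is present before calling. A minimal sketch, with a function name that is ours:

```python
from typing import Optional

def reviews_target(data_id: Optional[str] = None,
                   place_id: Optional[str] = None) -> dict:
    """Return the place identifier(s) for the reviews call, enforcing
    the schema note that either dataId or placeId should be set."""
    if data_id is None and place_id is None:
        raise ValueError("set dataId or placeId")
    target = {}
    if data_id is not None:
        target["dataId"] = data_id
    if place_id is not None:
        target["placeId"] = place_id
    return target
```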
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It details the return fields (author, rating, text, etc.) and mentions pagination. It does not discuss rate limits or authentication, but as a read-only fetch, the disclosure is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear title, a paragraph on behavior, a list of return fields, and use cases. It is concise enough for the complexity and front-loads the purpose, though the first line repeats the title.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 6 parameters and no output schema, the description covers the tool's purpose, parameters, return fields, and use cases. It lacks explicit mention of the difference from the contributor reviews sibling, but overall is fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining how the sort options work (e.g., qualityScore), the topicId filter, and the pagination token. It also identifies dataId and placeId as alternatives.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches Google Maps reviews for a place by dataId or placeId, with pagination, sorting, and filtering options. It distinguishes from sibling tools (e.g., contributor reviews) by focusing on place reviews and listing specific features.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases such as reputation management, sentiment mining, competitor benchmarking, and feeding LLMs. It does not directly compare to the sibling contributor reviews tool but implies when to use this via the listed use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_maps_search_performMapSearch (google_maps_search: GET /) - Grade: A

Get Google Maps Search Results

Runs a Google Maps search by keyword plus optional GPS coordinates (@lat,lng,zoomz via ll) with language, country, domain, and offset-based pagination (start). Returns the local pack list with placeId, name, address, coordinates, rating, review count, price level, categories, phone, website, hours, and thumbnail. Use for local lead generation, competitor density mapping, market expansion research, hyperlocal directories, and feeding placeIds into the Maps Place, Reviews, or Photos endpoints.

Parameters (JSON Schema)

- q (required): Search query term or phrase.
- gl: The two-letter country code for the country you want to limit the search to.
- hl: The two-letter language code for the language you want to use for the search.
- ll: GPS coordinates of the location where the search query is to be performed. Required if the `start` parameter is present. The format is `@` followed by latitude, longitude, and zoom level, separated by commas; latitude and longitude are in decimal degrees, and the zoom level is an integer. Example: `@40.7455096,-74.0083012,14z`.
- start: Result offset for pagination; the number of rows to skip from the beginning of the results. An offset of 0 (the default) returns the first page, 20 the second, 40 the third, and so on. Especially relevant when used in conjunction with `ll` for location-based searches.
- domain: Google domain to use. Default is google.com.
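The `ll` and `start` parameters interact: `start` is only valid when `ll` is present, and pagination moves in steps of 20. A minimal sketch of assembling these query parameters (the helper names are hypothetical; only the formats and the step size come from the parameter docs above):

```python
def build_ll(lat: float, lng: float, zoom: int) -> str:
    """Format GPS coordinates for the `ll` parameter: @<lat>,<lng>,<zoom>z."""
    return f"@{lat},{lng},{zoom}z"

def maps_search_params(q: str, page: int = 1, lat=None, lng=None, zoom=14) -> dict:
    """Build Maps search query params; pages advance in offsets of 20."""
    params = {"q": q}
    if page > 1 or lat is not None:
        # Per the docs, `start` requires `ll` to be present.
        if lat is None or lng is None:
            raise ValueError("`start` pagination requires `ll` coordinates")
        params["ll"] = build_ll(lat, lng, zoom)
        params["start"] = (page - 1) * 20
    return params
```

For example, page 3 of a search around Manhattan would carry `start=40` alongside the `ll` string.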
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full disclosure responsibility. It describes the return format (local pack list with fields) and pagination via 'start'. It does not mention rate limits or authentication, but is transparent about what the tool does and returns. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with a clear title and a single paragraph. It front-loads the purpose and uses bullet-like listing in the last sentence. Every sentence adds value without unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters and no output schema, the description is fairly complete, listing return fields and use cases. However, it does not specify the structure of the output (e.g., array or object) or error handling, which is a notable gap for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds context about 'optional GPS coordinates' for `ll` and 'offset-based pagination' for `start`, but these are already in the schema. It does not significantly enhance meaning beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get Google Maps Search Results' and specifies it runs a search by keyword plus optional GPS coordinates. It lists the return fields and distinguishes itself from sibling tools like place details, reviews, and photos endpoints, making the purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly lists use cases such as local lead generation, competitor density mapping, market expansion research, and feeding placeIds into other endpoints. While it doesn't explicitly state when not to use it, the mention of alternatives (e.g., Maps Place, Reviews, Photos endpoints) provides implicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_serp_ai_mode_getAiModeResponse (google_serp_ai_mode: GET /), Grade: A

Get AI Mode SERP Results

Captures Gemini-powered AI Mode answers from Google Search. Returns the conversational response text, cited source links, subtopic breakdowns, follow-up suggestions, and a subsequentRequestToken for multi-turn continuation. Use for next-gen search interfaces, AI-answer monitoring, citation tracking, content research agents, building question-answering pipelines grounded in live Google results, and person/company data enrichment — e.g. asking Who is the CEO of HasData?, What is Roman Milyushkevich's LinkedIn?, HasData founder email, HasData Instagram handle to get a synthesized answer plus source URLs in one call, ideal for lead enrichment, sales research, people search, and filling in contact/attribute gaps for CRM records.

Parameters (JSON Schema)

- q (required): The search term for which you want to scrape the SERP.
- gl: The two-letter country code for the country you want to limit the search to.
- hl: The two-letter language code for the language you want to use for the search.
- uule: The encoded location parameter.
- location: Google canonical location for the search.
- continuable: Whether to continue an existing AI Mode conversation.
- subsequentRequestToken: Token used to continue a previous AI Mode request.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully cover behavioral traits. It describes the return structure but does not mention auth requirements, rate limits, data freshness, idempotency, or safety (e.g., non-destructive nature). For a read operation, these are notable omissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that front-loads purpose and return components, then lists use cases. While informative, it could be more structured (e.g., bullet points) for easier scanning. Overall, it is concise given the amount of information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description adequately explains return values (conversational text, source links, subtopics, follow-ups, token). It covers the main output components but lacks details on error handling, pagination, or edge cases. Given the tool's complexity and 7 parameters, it is fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds some context (e.g., subsequentRequestToken for multi-turn continuation, use of q for person queries) but does not significantly enhance understanding beyond the schema. It covers all parameters implicitly but not explicitly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get AI Mode SERP Results' capturing Gemini-powered AI Mode answers. It details specific return components (conversational response text, cited source links, subtopic breakdowns, follow-up suggestions, subsequentRequestToken) and provides concrete examples, clearly distinguishing it from sibling tools like standard Google Search or AI Overview.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives extensive use cases (e.g., next-gen search interfaces, AI-answer monitoring, citation tracking, person/company data enrichment) and examples (e.g., 'Who is the CEO of HasData?'). It implicitly suggests when to use this tool over others, but lacks explicit 'when not to use' guidance or comparison with alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_serp_ai_overview_getAiOverviewResponse (google_serp_ai_overview: GET /), Grade: A

Get AI Overview Results

Fetches the lazy-loaded Google AI Overview block via a pageToken returned by the Google SERP API (token valid for 4 minutes). Returns the AI-generated answer text, referenced source URLs, and expanded subtopic sections. Use as a follow-up call to Google SERP for tracking AI citations in SEO, fact-checking answers against sources, and LLM retrieval pipelines grounded in live Google results.

Parameters (JSON Schema)

- pageToken (required): Token from the `aiOverview` block in a Google SERP API response. Valid for 4 minutes.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the 4-minute token validity and describes the return fields (answer text, sources, subtopic sections). With no annotations provided, the description carries the full burden; it lacks details on error handling and rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with name and immediate purpose. Every sentence adds value with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes what the tool returns explicitly. It lacks mention of error behavior (e.g., an expired token), but for a simple fetch tool with one parameter it is sufficient even without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters with description. The description adds context: token source ('from `aiOverview` block') and validity period, enhancing schema info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches Google AI Overview results via a pageToken. It distinguishes from sibling Google SERP tools by noting it's a follow-up call and specifying SEO/LLM use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explains it is a follow-up to the Google SERP API and lists use cases (SEO, fact-checking, LLM pipelines). It does not explicitly mention when not to use the tool, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_serp_events_getEventInformation (google_serp_events: GET /), Grade: A

Get Google Events Results

Scrapes the Google Events vertical for a query plus location (or uule) with date filters (today, tomorrow, this/next week, weekend, this/next month), virtual-event toggle, domain/country/language targeting, and pagination. Returns event title, start date/time, venue name and address, ticket/source links, description, and thumbnail. Use for event-discovery chatbots, local aggregators, calendar sync, competitive monitoring of event listings, and pulling upcoming shows/conferences for a region.

Parameters (JSON Schema)

- q (required): The search term for which you want to scrape the SERP.
- gl: The two-letter country code for the country you want to limit the search to.
- hl: The two-letter language code for the language you want to use for the search.
- uule: The encoded location parameter.
- start: Number of search results to skip, used for pagination. A value of 0 (default) indicates the first page, 10 the second, and 20 the third.
- domain: Google domain to use. Default is google.com.
- htichips: Filter parameter for refining event search results. Multiple filters can be passed, separated by commas. Available filters:
  - `date:today`: Today's Events
  - `date:tomorrow`: Tomorrow's Events
  - `date:week`: This Week's Events
  - `date:weekend`: This Weekend's Events
  - `date:next_week`: Next Week's Events
  - `date:month`: This Month's Events
  - `date:next_month`: Next Month's Events
  - `event_type:Virtual-Event`: Online Events
  For example, to filter for today's online events, use `event_type:Virtual-Event,date:today`.
- location: Google canonical location for the search.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It explains that the tool scrapes the Google Events vertical, returns specific fields (title, date, venue, links, description, thumbnail), and mentions filters (date, virtual-event, domain/country/language, pagination). However, it does not disclose rate limits, authentication requirements, or whether it is read-only. The description is decent but lacks deeper behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, front-loaded with the title and a comprehensive overview of capabilities. It could be more concise by separating use cases, but it remains efficient and informative without unnecessary fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explicitly lists the returned fields (title, start date/time, venue, links, description, thumbnail), which is helpful. It covers key parameters and pagination. However, it omits potential limitations like result count or response size. Overall, it is fairly complete for a scraping tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description briefly summarizes the filters (date, virtual-event toggle, domain/country/language targeting, pagination) but adds little beyond the schema's detailed parameter descriptions (e.g., `htichips` already lists all filter options). The description does not provide new semantic value that the schema lacks.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool scrapes Google Events results with specific filters, distinguishing it from sibling tools like general search, news, or shopping tools. The verb 'Get' and resource 'Google Events Results' are specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists explicit use cases: event-discovery chatbots, local aggregators, calendar sync, competitive monitoring, and pulling upcoming shows/conferences. It provides context for when to use this tool, though it does not explicitly state when not to use it or mention alternatives. However, given sibling tools, the purpose is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_serp_immersive_product_getImmersive_e29f691177 (google_serp_immersive_product: GET /), Grade: A

Get Immersive Product Information

Expands the Google Shopping Immersive Product pop-up given an immersiveProductPageToken from the Google Shopping API, with optional moreStores (up to ~13 merchants instead of 3–5) and nextPageToken for paginating stores. Returns multi-store offers (merchant, price, shipping, condition, URL), product specs, images, ratings, and the nextPageToken. Use for price-comparison bots, merchant discovery, dropshipping research, and aggregating full offer lists per product.

Parameters (JSON Schema)

- pageToken (required): Token for displaying more product info in the Google immersive pop-up, available in the Google Shopping API response as the `immersiveProductPageToken` property.
- moreStores: Fetch additional store results in a single search. By default the tool returns 3–5 stores; when true it returns up to 13 or the maximum available for the product.
- nextPageToken: Token used to retrieve the next page of store results.
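Paginating store offers means re-calling the endpoint with the `nextPageToken` from each response until none is returned. A sketch of that loop, decoupled from HTTP (the `fetch` callable and the response field names `offers`/`nextPageToken` are assumptions, not confirmed schema; the description above only guarantees that a nextPageToken is returned):

```python
def iter_store_offers(fetch, page_token: str):
    """Yield store offers across all pages of the immersive-product endpoint.

    `fetch` is any callable taking a params dict and returning the parsed
    JSON response (hypothetical shape)."""
    params = {"pageToken": page_token, "moreStores": True}
    while True:
        resp = fetch(params)
        yield from resp.get("offers", [])
        next_token = resp.get("nextPageToken")
        if not next_token:
            break  # no further pages of stores
        params = {"pageToken": page_token, "nextPageToken": next_token}
```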
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description adequately explains the behavior: it is a read-only operation that returns product details and paginated store offers. It specifies optional parameters like moreStores and nextPageToken, but lacks info on authentication or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loaded with the core purpose, and uses a single effective paragraph. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately covers return data (multi-store offers, specs, images, ratings, nextPageToken). It fully addresses the tool's complexity with 3 parameters and clear use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds context by explaining the token source, the effect of moreStores (up to ~13 merchants vs 3-5), and nextPageToken for pagination. This enhances understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it expands the Google Shopping Immersive Product pop-up, specifying the required token and return data. It distinguishes itself from siblings like product search tools by focusing on immersive product information with multi-store offers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit use cases are listed (price-comparison bots, merchant discovery, dropshipping research, aggregating offer lists), providing clear guidance. However, it does not mention when not to use or directly compare to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_serp_news_getGoogleNews (google_serp_news: GET /), Grade: A

Get Google News Results

Retrieves Google News results by free-text query, topicToken (World, Business, Technology, etc.), sectionToken, publicationToken (e.g. CNN, BBC), or storyToken (full-coverage cluster with sort by relevance/date). Returns article title, snippet, source publisher, published date, thumbnail, and URL, plus tokens for navigating topics, sub-sections, and story clusters. Use for news monitoring, brand/PR tracking, topical aggregators, publisher-specific feeds, and drilling into full story coverage.

Parameters (JSON Schema)

- q: Free-text query as used on news.google.com. Not allowed with `topicToken`, `storyToken`, or `publicationToken`.
- gl: The two-letter country code for the country you want to limit the search to.
- hl: The two-letter language code for the language you want to use for the search.
- so: Sort order for articles in a story. Use only with `storyToken`.
- storyToken: Token for a single news story cluster (the "Full coverage" page).
- topicToken: Token for a Google News topic such as World, Business, or Technology. Not allowed with `q`, `storyToken`, or `publicationToken`.
- sectionToken: Token for a sub-section under a topic, for example Business → Economy. Use only when `topicToken` or `publicationToken` is present.
- publicationToken: Token for a specific publisher such as CNN or BBC. Not allowed with `q`, `storyToken`, or `topicToken`.
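The mutual-exclusion rules among these parameters are easy to violate, so a caller can validate a params dict before sending. A sketch encoding the constraints stated in the schema above (the validator itself is hypothetical; the rules are from the parameter docs):

```python
def validate_news_params(params: dict) -> None:
    """Enforce Google News parameter constraints: q, topicToken, storyToken,
    and publicationToken are mutually exclusive; sectionToken requires
    topicToken or publicationToken; `so` only applies with storyToken."""
    exclusive = [k for k in ("q", "topicToken", "storyToken", "publicationToken")
                 if k in params]
    if len(exclusive) > 1:
        raise ValueError(f"mutually exclusive parameters: {exclusive}")
    if "sectionToken" in params and not ({"topicToken", "publicationToken"} & params.keys()):
        raise ValueError("sectionToken requires topicToken or publicationToken")
    if "so" in params and "storyToken" not in params:
        raise ValueError("`so` is only valid with storyToken")
```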
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the return fields (title, snippet, source, date, thumbnail, URL) and navigation tokens. It is transparent about the read-only nature, though it does not mention rate limits or pagination. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (5 sentences) and front-loaded with purpose. Each sentence adds value, explaining tokens and use cases without redundancy. Efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 8 parameters and no output schema, the description covers the main use cases (news monitoring, brand tracking, aggregators, etc.) and explains the tokens. It omits pagination details but is otherwise complete for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, baseline is 3. The description adds value by explaining how parameters interact (e.g., sectionToken only with topicToken or publicationToken). This goes beyond the schema's individual descriptions, aiding the agent in correct usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get Google News Results' and elaborates on retrieval via free-text query or various tokens (topic, section, publication, story). It distinguishes this tool from sibling tools like general web search, images, shopping, etc., by specifying it's for Google News.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on when to use each parameter and their restrictions (e.g., 'Not allowed with q, storyToken, or publicationToken'). It implies use for news-specific queries but does not explicitly contrast with sibling tools. Adequate for an informed agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_serp_product_getProductInformation (google_serp_product: GET /), Grade: A

Get Product Information

Pulls detailed product data from Google Shopping by productId with searchType (offers, specs, reviews) and rich filters (free shipping, used-condition, sort by price/total price/deals/seller rating, reviews count). Returns product title, images, price, ratings, specs, merchant offers (seller, shipping, condition, total price), and review text depending on searchType. Use for price intelligence, catalog enrichment, review mining, competitor spec comparison, and building shopping assistants that surface the cheapest or highest-rated offer.

Parameters (JSON Schema)

- gl: The two-letter country code for the country you want to limit the search to.
- hl: The two-letter language code for the language you want to use for the search.
- uule: The encoded location parameter.
- start: Number of search results to skip, used for pagination. A value of 0 (default) indicates the first page, 10 the second, and 20 the third. Applicable only when `searchType=offers`; for reviews pagination use the `filter` parameter.
- domain: Google domain to use. Default is google.com.
- filter: Filter parameter for refining search results. Multiple filters can be passed, separated by commas.
  Offers filters:
  - `freeship:1`: Show only products with free shipping.
  - `ucond:1`: Show only used products.
  - `scoring:p`: Sort by base price.
  - `scoring:tp`: Sort by total price.
  - `scoring:cpd`: Sort by current promotion deals (special offers).
  - `scoring:mrd`: Sort by seller's rating.
  Reviews filters:
  - `rnum:{number}`: Number of results (100 is max).
- location: Google canonical location for the search.
- productId (required): The product ID to get results for.
- searchType: Which product information to fetch: `offers`, `specs`, or `reviews`.
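The `filter` string is composed differently for offers and for reviews. A sketch of building both forms (helper names are hypothetical; filter keys and the 100-review cap come from the parameter docs):

```python
def offers_filter(free_shipping=False, used_only=False, sort=None) -> str:
    """Compose the comma-separated `filter` value for searchType=offers.
    sort: one of p (base price), tp (total price), cpd (deals),
    mrd (seller's rating)."""
    parts = []
    if free_shipping:
        parts.append("freeship:1")
    if used_only:
        parts.append("ucond:1")
    if sort:
        parts.append(f"scoring:{sort}")
    return ",".join(parts)

def reviews_filter(count: int) -> str:
    """Compose the `filter` value for searchType=reviews; rnum caps at 100."""
    return f"rnum:{min(count, 100)}"
```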
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the data returned (title, images, price, ratings, etc.) and the parameters that affect behavior, but does not mention rate limits, authorization, or idempotency. The description is adequate for a read-only tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, front-loading the purpose and then elaborating on capabilities. It is informative without verbosity, though the second sentence is somewhat dense.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the description mentions key return fields (title, images, price, etc.). It covers all major aspects of the tool's functionality given its complexity, though it could be slightly more detailed on filter combinations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). The description adds value by explaining how searchType and filter parameters work, listing example filters and their effects, which goes beyond the schema's enum descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves detailed product data from Google Shopping by productId, with searchType and filters. It distinguishes itself from sibling tools like general search or shopping search by focusing on product information for a specific ID.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases ('price intelligence, catalog enrichment, review mining, competitor spec comparison') but does not explicitly state when not to use or mention alternative tools among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_google_serp_serp_getSearchResults (google_serp_serp: GET /), Grade: A

Get Google Search Results

Full-featured Google Search scraper with location/uule, country (gl), language (hl, lr), domain, device type, safesearch, time/date filters (qdr, cdr), knowledge-graph IDs, and tbm vertical selection (images, videos, news, shopping, local), plus offset/num pagination. Returns organic results (title, link, snippet, position), ads, knowledge graph, related searches, People Also Ask, local pack, featured snippets, AI Overview pageToken, and rich SERP features. Use for SEO rank tracking, keyword research, SERP-feature monitoring, competitor analysis, grounding LLMs with fresh location-aware search data, and especially for person/company data enrichment — e.g. finding a person's LinkedIn/Instagram/Twitter profile (Roman Milyushkevich LinkedIn, HasData Instagram), a company's CEO/founder/leadership (HasData CEO, HasData founder), contact emails (Roman Milyushkevich HasData email), phone numbers, GitHub profiles, press mentions, or any public attribute of a person or business by running a targeted query and parsing the top organic results.

Parameters (JSON Schema)

q (required): Specify the search term for which you want to scrape the SERP.
gl (optional): The two-letter country code for the country you want to limit the search to.
hl (optional): The two-letter language code for the language you want to use for the search.
lr (optional): Specifies the language of the websites to return results from, filtering results based on the language of the web content.
si (optional): Google Cached Search Parameters ID.
num (optional): Number of results per page, ranging from 10 to 100.
tbm (optional): Specify the type of search.
tbs (optional): Supports various filters, combined by separating them with a comma. Examples: specific time range `cdr:1,cd_min:10/17/2018,cd_max:3/8/2021`; sort by date `sbd:1`; sort by relevance `sbd:0`; only pages with images `img:1`. Quick date range: `qdr:h` (past hour), `qdr:d` (past day), `qdr:w` (past week), `qdr:m` (past month), `qdr:y` (past year); append a number (e.g. `qdr:d10`) for the last 10 hours, days, weeks, months, or years respectively.
lsig (optional): Additional Google Place ID.
nfpr (optional): Controls if auto-corrected results are shown. 0 includes them (default), 1 shows only the original query. Google may still return auto-corrected results if no others are available.
safe (optional): Adult content filtering option.
uule (optional): The encoded location parameter.
kgmid (optional): Google Knowledge Graph ID.
start (optional): Number of search results to skip, used for pagination. A value of 0 (default) is the first page, 10 the second, 20 the third. For Google Local results, start must be a multiple of 20 (20 for the second page, 40 for the third, etc.).
domain (optional): Google domain to use. Default is google.com.
filter (optional): Enables or disables the filters for 'Similar Results' and 'Omitted Results'. Set to 1 (default) to enable, or 0 to disable.
ludocid (optional): The Google Place ID for a specific location.
location (optional): Google canonical location for the search.
deviceType (optional): Specify the device type for the search.
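The offset pagination and `tbs` filters above combine in a predictable way. Below is a minimal sketch of assembling the argument payload for this tool; the call itself goes through an MCP client, so only the dict construction is shown, and the helper name is illustrative rather than part of the API.

```python
# Illustrative helper: builds the argument dict for
# hasdata_google_serp_serp_getSearchResults. The parameter names
# (q, num, start, tbs, gl, hl) come from the schema above; the
# function itself is not part of the API.

def build_serp_args(q, page=0, per_page=10, qdr=None, gl=None, hl=None):
    # start is an absolute result offset: page 2 at 10 results/page -> 20
    args = {"q": q, "num": per_page, "start": page * per_page}
    if qdr:
        # quick-date-range filters travel inside tbs, e.g. "qdr:w" = past week
        args["tbs"] = f"qdr:{qdr}"
    if gl:
        args["gl"] = gl
    if hl:
        args["hl"] = hl
    return args

print(build_serp_args("hasdata ceo", page=2, qdr="w", gl="us"))
```

Note that for Google Local results the same `start` value would instead need to be a multiple of 20, per the parameter description above.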
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It details that the response includes organic results, ads, the knowledge graph, and other rich features. However, it omits behavioral details such as authentication, rate limits, and cost.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is verbose, listing many capabilities and use cases. While well-organized, it could be more concise to improve readability. The purpose is front-loaded, but the sheer bulk of text may overwhelm an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 19 parameters and no output schema, the description compensates by describing return features and many use cases. However, it lacks explanations of parameter interactions (e.g., location vs uule) and does not fully cover output format details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds extra context for parameters like tbs (with examples), start (pagination note), and use cases for person/company enrichment, enhancing understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Get Google Search Results' and elaborates with a comprehensive list of capabilities, including organic results, ads, knowledge graph, and more. It distinguishes itself from siblings like the 'light' version and other specialized SERP tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases such as SEO rank tracking, keyword research, and person/company data enrichment. However, it does not explicitly state when not to use this tool or directly compare to sibling tools like the light version or news-specific tool.

hasdata_google_serp_serp_light_getSearchResults (google_serp_serp_light: GET /) [Grade: A]

Get Google Light Search Results

Lightweight Google Search scraper that returns only organic results and basic pagination, omitting AI Overview, knowledge graph, PAA, and other rich SERP blocks for faster, cheaper responses. Supports location/uule, country (gl), language (hl/lr), domain, safesearch, and time/date filters (qdr, cdr) with offset/num pagination. Returns title, link, snippet, and position per result. Use for high-volume keyword monitoring, bulk rank tracking, backlink discovery, and any workflow where only the ten blue links matter.

Parameters (JSON Schema)

q (required): Specify the search term for which you want to scrape the SERP.
gl (optional): The two-letter country code for the country you want to limit the search to.
hl (optional): The two-letter language code for the language you want to use for the search.
lr (optional): Specifies the language of the websites to return results from, filtering results based on the language of the web content.
num (optional): Number of results per page, ranging from 10 to 100.
tbs (optional): Supports various filters, combined by separating them with a comma. Examples: specific time range `cdr:1,cd_min:10/17/2018,cd_max:3/8/2021`; sort by date `sbd:1`; sort by relevance `sbd:0`; only pages with images `img:1`. Quick date range: `qdr:h` (past hour), `qdr:d` (past day), `qdr:w` (past week), `qdr:m` (past month), `qdr:y` (past year); append a number (e.g. `qdr:d10`) for the last 10 hours, days, weeks, months, or years respectively.
safe (optional): Adult content filtering option.
uule (optional): The encoded location parameter.
start (optional): Number of search results to skip, used for pagination. A value of 0 (default) is the first page, 10 the second, 20 the third. For Google Local results, start must be a multiple of 20 (20 for the second page, 40 for the third, etc.).
domain (optional): Google domain to use. Default is google.com.
filter (optional): Enables or disables the filters for 'Similar Results' and 'Omitted Results'. Set to 1 (default) to enable, or 0 to disable.
location (optional): Google canonical location for the search.
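For the bulk rank-tracking use case above, a result's absolute rank is its page offset plus its index on the page. A small sketch, assuming each organic result carries the `link` field described above; the `pages` structure holding (offset, results) pairs is illustrative.

```python
# Sketch: computing absolute rank from paginated light-SERP responses.
# Each result is assumed to have a "link" field, per the description
# above; the pages structure is illustrative.

def find_rank(pages, target_domain):
    """Return the 1-based absolute rank of the first result whose link
    contains target_domain, scanning pages in order, or None."""
    for start, results in pages:          # pages: [(start_offset, [result, ...]), ...]
        for idx, r in enumerate(results):
            if target_domain in r["link"]:
                return start + idx + 1    # absolute position across pages
    return None

pages = [
    (0,  [{"link": "https://a.com/x"}, {"link": "https://b.com/y"}]),
    (10, [{"link": "https://c.com"}, {"link": "https://example.com/page"}]),
]
print(find_rank(pages, "example.com"))  # prints 12
```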
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains the lightweight nature of the tool, what is omitted, and the supported filters (location, country, language, domain, safesearch, time/date, pagination). It does not mention rate limits or auth, but covers the key behavioral traits of a scraper tool.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise paragraphs with no wasted words. The first paragraph explains what the tool does and what it omits; the second lists use cases. It is front-loaded with the core purpose.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description covers what is returned: 'title, link, snippet, and position per result.' It also explains filtering and pagination behavior. It does not document error handling or quotas, but for a lightweight scraper with 12 params (1 required), it is reasonably complete.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all 12 parameters have descriptions in the schema). The description does not add significant new meaning beyond what the schema provides; it summarizes categories of filters (location/uule, country, language, etc.) but the schema already describes each param individually. Baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool gets Google Light search results, returning only organic results and basic pagination while omitting rich SERP blocks like AI Overview, the knowledge graph, and PAA. This differentiates it from the sibling tool 'hasdata_google_serp_serp_getSearchResults', which presumably returns the full SERP.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use this tool 'for high-volume keyword monitoring, bulk rank tracking, backlink discovery, and any workflow where only the ten blue links matter.' This implies when to use it and when not to (when rich results are needed). It does not explicitly name alternative tools, but the context signals that a full-SERP sibling exists.

hasdata_google_serp_shopping_getSearchResults (google_serp_shopping: GET /) [Grade: A]

Get Shopping Search Results

Scrapes Google Shopping listings for a query with location/uule, country/language/domain, time/date filters, device type, shoprs filter-helper IDs, and offset pagination. Returns product title, price, merchant/source, rating, reviews count, thumbnail, product link, productId, immersiveProductPageToken, and filter chips with hasdata_link for refining by brand/price/condition/promotions. Use for e-commerce price tracking, catalog building, promotion discovery, and feeding productIds into the Product API or tokens into the Immersive Product API for deeper data.

Parameters (JSON Schema)

q (required): Specify the search term for which you want to scrape the SERP.
gl (optional): The two-letter country code for the country you want to limit the search to.
hl (optional): The two-letter language code for the language you want to use for the search.
tbs (optional): Supports various filters, combined by separating them with a comma. Examples: specific time range `cdr:1,cd_min:10/17/2018,cd_max:3/8/2021`; sort by date `sbd:1`; sort by relevance `sbd:0`; only pages with images `img:1`. Quick date range: `qdr:h` (past hour), `qdr:d` (past day), `qdr:w` (past week), `qdr:m` (past month), `qdr:y` (past year); append a number (e.g. `qdr:d10`) for the last 10 hours, days, weeks, months, or years respectively.
uule (optional): The encoded location parameter.
start (optional): Number of search results to skip, used for pagination. A value of 0 (default) is the first page, 40 the second, 80 the third.
domain (optional): Google domain to use. Default is google.com.
shoprs (optional): Specifies the helper ID for applying search filters. Must be used with the updated `q` parameter, which includes the selected filter (e.g., Coffee sale). To apply filters, use the `hasdata_link` from `filters[index].options[index]` in the JSON. Apply multiple filters by following each `hasdata_link` one by one. To remove a filter, follow its specific `hasdata_link`.
location (optional): Google canonical location for the search.
deviceType (optional): Specify the device type for the search.
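The `shoprs` flow (follow each chip's `hasdata_link`) is easier to see in code. Below is a sketch that collects the chip links from a response, assuming the `filters[index].options[index].hasdata_link` nesting described for the `shoprs` parameter; the `name` fields and sample URLs are illustrative.

```python
# Sketch: collecting filter-chip links from a Shopping response.
# The filters/options/hasdata_link nesting follows the shoprs
# description; sample data is illustrative.

def filter_links(response):
    # Map "<filter>: <option>" -> hasdata_link for every filter chip.
    links = {}
    for f in response.get("filters", []):
        for opt in f.get("options", []):
            if "hasdata_link" in opt:
                links[f"{f['name']}: {opt['name']}"] = opt["hasdata_link"]
    return links

resp = {"filters": [
    {"name": "Condition", "options": [
        {"name": "New", "hasdata_link": "https://example.invalid/shop?shoprs=abc"},
        {"name": "Used", "hasdata_link": "https://example.invalid/shop?shoprs=def"},
    ]},
]}
print(filter_links(resp))
```

Applying a filter then means re-issuing the search with the `q` and `shoprs` values encoded in the chosen link; removing it means following that filter's own `hasdata_link` again, as the parameter description notes.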
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains the return data and filter mechanics, but omits details like authentication, rate limits, or error conditions. The behavioral disclosure is adequate but not comprehensive.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured paragraph that front-loads the purpose, then details parameters and use cases. Every sentence provides meaningful information without redundancy.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 10 parameters and no output schema, the description covers the key return fields and filter usage, but lacks detailed response structure (e.g., types, nesting) and pagination handling beyond the `start` parameter. It is sufficient for basic use but not fully complete.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining the `shoprs` parameter with filter chip usage and `hasdata_link` navigation, and by providing examples for `tbs`. This extra context justifies a higher score.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as retrieving Google Shopping search results for a query, specifies the resource (Google Shopping listings), and details the return fields and use cases. It distinguishes from siblings like general web search tools by focusing specifically on shopping data.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists use cases (price tracking, catalog building, etc.) and mentions feeding data into Product APIs, but it does not explicitly state when to use this tool versus alternative sibling tools (e.g., general SERP or product info tools). The guidance is implied rather than explicit.

hasdata_google_serp_short_videos_getShortVideosSearchResults (google_serp_short_videos: GET /) [Grade: A]

Get Short Videos Search Results

Scrapes the Google Short Videos carousel (TikTok, YouTube Shorts, Instagram Reels, etc.) for a query with location/uule, country (gl/cr), language (hl/lr), device type, and page-based pagination. Returns video title, thumbnail, duration, source platform, channel/creator, publish date, and direct video URL. Use for short-form content discovery, viral-trend monitoring, influencer research, cross-platform video aggregation, and sourcing short clips to summarize or embed in LLM responses.

Parameters (JSON Schema)

q (required): Search query term for retrieving short videos results.
cr (optional): The country code for the country you want to limit the search to.
gl (optional): The two-letter country code for the country you want to limit the search to.
hl (optional): The two-letter language code for the language you want to use for the search.
lr (optional): Specifies the language of the websites to return results from, filtering results based on the language of the web content.
page (optional): Page number for paginated results, where 0 is the first page.
uule (optional): The encoded location parameter.
location (optional): Google canonical location for the search.
deviceType (optional): Specify the device type for the search.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what is returned (title, thumbnail, etc.) but lacks details on rate limits, authentication, data freshness, and pagination behavior beyond 'page-based pagination'.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the purpose and uses a single paragraph for details. It efficiently lists use cases without fluff, though it could be more structured.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 9 parameters and no output schema, the description explains return fields and use cases. However, it does not specify pagination details or response structure, leaving some gaps for completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are documented in the schema. The description adds no additional parameter semantics beyond what the schema already provides, resulting in a baseline score of 3.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get Short Videos Search Results' and explains it scrapes the Google Short Videos carousel for specific platforms, distinguishing it from general SERP tools and other sibling tools.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides specific use cases like short-form content discovery, viral-trend monitoring, and influencer research, but does not explicitly state when not to use it or mention alternative tools.

hasdata_google_travel_flights_getGoogleFlights (google_travel_flights: GET /) [Grade: A]

Get Google Flights Results

Searches Google Flights for one-way, round-trip, or multi-city itineraries with passenger mix (adults, children, infants in-seat/on-lap), travel class, bags, max price, sort order (price, duration, emissions, departure/arrival time), stops, include/exclude airlines and connections, time windows, layover duration, and deep-search mode. Returns per-itinerary price, currency, total duration, stops, flight legs with airline, flight number, aircraft, departure/arrival airports and times, CO2 emissions, plus booking and departure tokens for round-trip returns or booking options. Use for travel-planning agents, fare monitoring, corporate travel dashboards, emission-aware trip optimization, and comparing routes and airlines across markets.

Parameters (JSON Schema)

gl (optional): The two-letter country code for the country you want to limit the search to.
hl (optional): The two-letter language code for the language you want to use for the search.
bags (optional): Number of carry-on bags per passenger.
type (optional): Specifies the type of flight: `roundTrip` (default), `oneWay`, or `multiCity` (requires `multiCityJson` for flight details). For round trips, retrieve return-flight details with a separate request using `departureToken`.
stops (optional): Restrict the number of stops (layovers) in the flight itinerary.
adults (optional): Number of adult passengers (>= 1 if specified).
sortBy (optional): Sort the flight results by price, departure time, arrival time, etc.
children (optional): Number of child passengers.
currency (optional): Defines the currency of the returned prices.
maxPrice (optional): Maximum price limit for the flight search, in the selected currency.
arrivalId (required): The arrival airport code (IATA) or location kgmid. An IATA code is a 3-letter uppercase code (e.g., `SFO` for San Francisco, `LHR` for London Heathrow); search on [IATA](https://www.iata.org/en/publications/directories/code-search). A location kgmid is a string starting with `/m/`, found in Wikidata under "Freebase ID" (e.g., `/m/02_286` for New York, NY). Multiple values can be separated by commas (e.g., `JFK,LGA,/m/0hptm`).
deepSearch (optional): Enable deep search. Returns the same results as Google Flights in a browser, but takes longer to respond. Default is `false`.
returnDate (optional): The return travel date in 'yyyy-MM-dd' format. Required when `type` is `roundTrip`.
showHidden (optional): Indicates whether to include hidden options in the results.
departureId (required): The departure airport code (IATA) or location kgmid, in the same format as `arrivalId`. Multiple values can be separated by commas (e.g., `JFK,LGA,/m/0hptm`).
maxDuration (optional): The maximum total flight duration in minutes.
returnTimes (optional): Up to 4 time boundaries (2 for departure, 2 for arrival) to filter return flights; each number represents the start of an hour. Examples: `6,20` = 6:00 AM - 9:00 PM departure; `1,15` = 1:00 AM - 4:00 PM departure; `7,18,2,21` = 7:00 AM - 9:00 PM departure plus 2:00 AM - 10:00 PM arrival.
travelClass (optional): The travel class for the flight (Economy, Premium Economy, Business, or First).
bookingToken (optional): Used to request booking options for selected flights. This token is found in the flight results and cannot be used with `departureToken`.
infantsOnLap (optional): Number of infants sitting on an adult's lap.
outboundDate (required): The outbound travel date in 'yyyy-MM-dd' format.
infantsInSeat (optional): Number of infants occupying seats.
lessEmissions (optional): Prefer flight options with lower carbon emissions.
multiCityJson (optional): Flight details for multi-city trips, as a JSON string containing multiple flight objects. Each object must include `departureId` (same format as the main `departureId`), `arrivalId` (same format as the main `arrivalId`), and `date` (same format as `outboundDate`); `times` is optional and uses the same format as `outboundTimes`.
outboundTimes (optional): Up to 4 time boundaries (2 for departure, 2 for arrival) to filter flights; same format as `returnTimes` (e.g., `6,20` = 6:00 AM - 9:00 PM departure).
departureToken (optional): Used to select a flight and retrieve return flights for a round trip, or the next leg of a multi-city itinerary.
excludeAirlines (optional): A comma-separated list of airline codes to exclude from results (e.g., `UA` is United Airlines); search codes on [IATA](https://www.iata.org/en/publications/directories/code-search). Cannot be used together with `includeAirlines`.
includeAirlines (optional): A comma-separated list of airline codes to exclusively include in results (e.g., `UA`). Cannot be used together with `excludeAirlines`.
layoverDuration (optional): Minimum and maximum layover duration in minutes. For example, `120, 360` filters layovers between 2 and 6 hours, while `45, 180` allows layovers from 45 minutes to 3 hours.
excludeConnections (optional): A comma-separated list of specific airports to exclude as connections.
includeConnections (optional): A comma-separated list of specific airports to allow as connections.
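`multiCityJson` is the one parameter that takes a serialized JSON string rather than a scalar, so it is worth sketching. The object keys (departureId, arrivalId, date, times) come from the parameter description above; the helper name, routes, and dates are illustrative.

```python
import json

# Sketch: building the multiCityJson payload for a multi-city search.
# Keys follow the parameter description above; sample legs are illustrative.

def multi_city_json(legs):
    # legs: (departureId, arrivalId, date[, times]) tuples; `times` uses
    # the same hour-boundary format as outboundTimes, e.g. "6,20"
    objs = []
    for leg in legs:
        obj = {"departureId": leg[0], "arrivalId": leg[1], "date": leg[2]}
        if len(leg) > 3:
            obj["times"] = leg[3]
        objs.append(obj)
    return json.dumps(objs)

payload = multi_city_json([
    ("SFO", "JFK", "2025-11-01"),
    ("JFK", "LHR", "2025-11-05", "6,20"),
])
print(payload)
```

The resulting string would be passed as `multiCityJson` alongside `type=multiCity`.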
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description notes that deepSearch takes longer, but does not disclose other behavioral traits like rate limits, authentication needs, or side effects. Without annotations, the description carries the full burden; it covers basic behavior but lacks depth.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with a clear purpose and flows into parameter details. It is somewhat verbose but efficiently covers all key aspects without redundancy.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (31 parameters, no output schema), the description explains input parameters and return values comprehensively, including price, legs, CO2 emissions, and tokens. It is complete for an agent to understand inputs and outputs.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds context by listing parameter types in prose (e.g., 'passenger mix, travel class, bags, max price, sort order'), enhancing understanding beyond the schema definitions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get Google Flights Results' and elaborates on specific flight search types and parameters. It distinguishes itself from sibling tools as the only flight-related tool, making purpose unambiguous.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases such as 'travel-planning agents, fare monitoring, corporate travel dashboards, emission-aware trip optimization' but does not specify when not to use the tool; no alternative flight tools exist among its siblings.

hasdata_indeed_job_getJobDetails (indeed_job: GET /) [Grade: A]

Get Indeed Job Details

Fetches a single Indeed job posting by its viewjob URL. Returns job title, company, location, salary/compensation, employment type, posted date, full description, requirements/benefits, and apply URL. Use for ATS/CRM enrichment, compensation benchmarking, resume-to-JD matching with LLMs, and structured archival of postings discovered via the Indeed Listing endpoint.

Parameters (JSON Schema)

url (required): The URL of the job vacancy to retrieve details for.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description does not disclose any behavioral traits beyond functionality (e.g., rate limits, authentication, read-only nature). It lists returned fields, providing some transparency, but lacks detail on constraints and side effects.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a clear paragraph with a front-loaded purpose statement. It is slightly wordy but contains no redundancy, and could be shortened without losing meaning.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter fetch tool, the description lists all returned fields and use cases. No output schema exists, but the field list compensates. It is missing details like URL format and error handling, but is adequate overall.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (one parameter 'url' with description). The description reiterates that the URL is for the job vacancy but adds no additional semantics or format hints beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool fetches a single Indeed job posting by viewjob URL and lists specific return fields (title, company, location, salary, etc.). Distinguishes itself from the sibling listing tool by specifying 'single job posting' and referencing the 'Indeed Listing endpoint'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Describes use cases such as ATS/CRM enrichment and compensation benchmarking. However, does not explicitly state when not to use it or mention alternatives beyond the listing endpoint. The context is implied from the single-URL input.

hasdata_indeed_listing_getJobListings (indeed_listing: GET /) - Grade: A

Get Indeed Job Listings

Searches Indeed job listings by keyword and location with sort (relevance/date), country domain targeting, and offset-based pagination (start). Returns an array of jobs with title, company, location, salary, posted date, job URL, and jobKey for the requested page. Use for job-market dashboards, role/geo hiring-trend analysis, sourcing pipelines, and generating URL lists to feed into the Indeed Job endpoint.

Parameters (JSON Schema)
- sort (optional): The sorting option for the search results.
- start (optional): The starting index of the results to retrieve.
- domain (optional): The domain of the Indeed site.
- keyword (required): The keyword used to search for job listings.
- location (required): The location to search for job listings.
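The parameters above can be combined into a tool-call arguments object. The sketch below is purely illustrative: the parameter names come from the schema, but the concrete keyword, location, sort values, and the assumption that one page of results corresponds to a fixed number of entries for the offset-style 'start' parameter are not documented behavior.

```python
# Hypothetical builder for hasdata_indeed_listing_getJobListings arguments.
# 'start' is offset-based pagination per the description; the page-size
# assumption (10 results per page) is illustrative, not documented.
def listing_args(keyword, location, page=0, page_size=10,
                 sort="relevance", domain=None):
    args = {
        "keyword": keyword,        # required
        "location": location,      # required
        "sort": sort,              # "relevance" or "date" per the description
        "start": page * page_size, # offset into the result set
    }
    if domain is not None:
        args["domain"] = domain    # e.g. a country-specific Indeed site
    return args

first_page = listing_args("data engineer", "Austin, TX")
second_page = listing_args("data engineer", "Austin, TX", page=1)
```

Job URLs from each page can then be handed to the Indeed Job endpoint, as the description suggests.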
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It describes the return structure (an array with specific fields) and pagination via the 'start' parameter, but does not disclose behavioral traits such as rate limits, authentication requirements, or side effects.

Conciseness: 4/5

The description front-loads the tool's name and purpose, then provides a concise paragraph. It is structured, and each sentence adds relevant information without being overly verbose.

Completeness: 4/5

Given no output schema, the description compensates by listing returned fields (title, company, location, salary, etc.) and explaining pagination. It covers key use cases and parameters adequately for a search endpoint.

Parameters: 3/5

The input schema has 100% description coverage, so the baseline is 3. The description adds minimal extra value beyond the schema, briefly explaining the purpose of sort, start, and domain without new semantic detail.

Purpose: 5/5

The description clearly states that the tool searches Indeed job listings by keyword and location, and mentions sorting and pagination. It distinguishes itself from the sibling getJobDetails tool by returning a list rather than details for a single posting.

Usage Guidelines: 4/5

The description provides explicit use cases (e.g., job-market dashboards, hiring-trend analysis) and mentions generating URL lists for the Indeed Job endpoint. It implies when to use this tool versus getJobDetails but lacks explicit exclusion statements.

hasdata_instagram_profile_getInstagramProfile (instagram_profile: GET /) - Grade: A

Get Instagram Profile

Fetches a public Instagram profile by username (handle) and returns full name, biography, external link, profile picture URL, followers count, following count, posts count, verified/private flags, and category. Use to enrich CRM/lead records, verify influencer reach before outreach, monitor competitor accounts, or build datasets of creator metadata for vetting and analytics.

Parameters (JSON Schema)
- handle (required): The Instagram username of the profile you want to scrape, without the `@` symbol.
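Because the schema requires the handle without a leading `@`, a small normalizer avoids the most likely calling mistake. This is a hypothetical helper, not part of the API; lowercasing is an extra assumption based on Instagram handles being case-insensitive in practice.

```python
# Hypothetical normalizer for the 'handle' argument of
# hasdata_instagram_profile_getInstagramProfile.
def profile_args(handle: str) -> dict:
    # Trim whitespace, drop any leading '@', and lowercase (assumption).
    return {"handle": handle.strip().lstrip("@").lower()}
```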
Behavior: 4/5

Although no annotations are provided, the description clearly indicates that the tool fetches a public profile and lists the returned fields. It does not mention rate limits or errors, but for a read-only tool the transparency is adequate.

Conciseness: 5/5

The description is concise: two short paragraphs front-load the purpose and then list use cases. Every sentence adds value without redundancy.

Completeness: 5/5

Despite lacking an output schema, the description enumerates all returned fields and provides practical use cases, making it complete for this simple one-parameter tool.

Parameters: 3/5

Schema coverage is 100%, with the handle parameter well defined. The description only restates 'by username (handle)' without adding significant information beyond the schema.

Purpose: 5/5

The description clearly states that the tool fetches a public Instagram profile by username, lists the specific data returned (full name, biography, etc.), and is distinct from sibling tools, which cover other platforms such as Airbnb, Amazon, and Google.

Usage Guidelines: 4/5

The description provides explicit use cases, such as enriching CRM records, verifying influencer reach, monitoring competitors, and building datasets, offering clear guidance on when to use this tool.

hasdata_redfin_listing_getRealEstateListings (redfin_listing: GET /) - Grade: A

Get Redfin Real Estate Listings

Searches Redfin for-sale, for-rent, or sold listings by zipcode with pagination. Returns each listing with address, Redfin URL, list price, beds/baths, square footage, lot size, year built, days on market, status, coordinates, photos, MLS number, and HOA. Use for real-estate market research, lead generation for agents, price/DOM trend analysis, and feeding URLs into the Redfin Property endpoint for deep-dive details.

Parameters (JSON Schema)
- page (optional): The page number of the results to retrieve.
- type (required): The type of listing.
- keyword (required): The zipcode used to search for listings.
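Note that the zipcode travels in the 'keyword' parameter, which is easy to get wrong. A minimal sketch, assuming the listing-type spellings ("for-sale", "for-rent", "sold") match the description's wording; the exact enum strings accepted by the API are not documented here.

```python
# Assumed type spellings, taken from the tool description's
# "for-sale, for-rent, or sold listings" phrasing.
VALID_TYPES = {"for-sale", "for-rent", "sold"}

def redfin_listing_args(zipcode: str, listing_type: str, page: int = 1) -> dict:
    """Build arguments for hasdata_redfin_listing_getRealEstateListings."""
    if listing_type not in VALID_TYPES:
        raise ValueError(f"unknown listing type: {listing_type}")
    # Per the schema above, 'keyword' carries the zipcode.
    return {"keyword": zipcode, "type": listing_type, "page": page}

args = redfin_listing_args("78701", "for-sale", page=2)
```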
Behavior: 3/5

No annotations are provided, so the description must disclose behavior. It mentions pagination and the listing types, but does not cover rate limits, authentication, data freshness, or the fact that it performs web scraping. The lack of behavioral detail is a gap.

Conciseness: 4/5

The description is moderately concise, starting with the core purpose and expanding with return fields and use cases. It could be slightly shorter, but every sentence contributes value.

Completeness: 4/5

Given the complexity (three parameters, no output schema), the description lists return fields comprehensively and explains use cases. It does not specify the output structure, but it is adequate for the stated purposes.

Parameters: 3/5

The input schema has 100% coverage, with descriptions for all three parameters. The description adds context (e.g., zipcode, pagination) but no semantics beyond the schema definitions.

Purpose: 5/5

The description clearly states that the tool retrieves Redfin real-estate listings, specifies the search parameters (zipcode, listing type, pagination), and lists the return fields. It distinguishes itself from the sibling hasdata_redfin_property_getPropertyDetails by pointing to that endpoint for deeper detail retrieval.

Usage Guidelines: 4/5

The description provides explicit use cases (market research, lead generation, trend analysis) and suggests feeding URLs into the Redfin Property endpoint for deeper details. However, it does not explicitly exclude scenarios or compare itself with other listing tools such as Zillow.

hasdata_redfin_property_getPropertyDetails (redfin_property: GET /) - Grade: A

Get Redfin Property Details

Fetches the full Redfin property page by URL. Returns address, list/sold price, price history, Redfin Estimate, beds/baths, square footage, lot size, year built, property type, HOA, days on market, school ratings, tax history, listing agent, full description, photos, walk/transit/bike scores, and nearby comparables. Use for CMA reports, investor due-diligence, valuation models, listing enrichment, and powering buyer-assistant agents with verified property facts.

Parameters (JSON Schema)
- url (required): The URL of the property on Redfin. Must be a valid Redfin property URL.
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It states that the tool is a read operation ('fetches') and lists output fields, but does not disclose error conditions, authentication needs, rate limits, or side effects. Adequate but not thorough.

Conciseness: 4/5

The description is a single paragraph that front-loads the purpose and uses a list-like enumeration of data fields. It is concise and avoids unnecessary words, though it could be more structured.

Completeness: 4/5

Given that the tool has one simple parameter and no output schema, the description covers the purpose, input, and output comprehensively. It lacks details on error handling and limitations, but for a straightforward tool it is quite complete.

Parameters: 3/5

Schema coverage is 100%, with a clear description of the 'url' parameter. The description adds the context that the tool fetches by URL but does not extend significantly beyond the schema, so the baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states that the tool fetches detailed Redfin property data by URL, listing many specific fields. It distinguishes itself from sibling tools such as hasdata_redfin_listing_getRealEstateListings (which searches listings) and the property-detail tools for other platforms like Zillow.

Usage Guidelines: 4/5

The description includes explicit use cases (CMA reports, due diligence, valuation models, etc.) that indicate when to use the tool. It does not explicitly state when not to use it or compare alternatives, though the context is clear.

hasdata_shopify_collections_getCollections (shopify_collections: GET /) - Grade: A

Get Shopify Store Collections

Lists collections from any public Shopify storefront URL with limit (up to 250) and page pagination. Returns each collection's id, title, handle, body_html description, image, and timestamps. Use the returned handles as input to the Shopify Products endpoint to enumerate category-specific catalogs, or to map a competitor's merchandising taxonomy and track collection changes over time.

Parameters (JSON Schema)
- url (required): The URL of the Shopify store. For example, 'https://b2bdemoexperience.myshopify.com'.
- page (optional): The page number of the results to retrieve. Must be a positive integer.
- limit (optional): The maximum number of collections to retrieve. Must be between 1 and 250.
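The documented constraints (positive page number, limit between 1 and 250) can be enforced client-side before the call. A minimal sketch; the store URL is the example from the schema, and the helper itself is hypothetical.

```python
# Hypothetical builder for hasdata_shopify_collections_getCollections
# arguments, clamping values to the documented constraints.
def collections_args(store_url: str, page: int = 1, limit: int = 250) -> dict:
    return {
        "url": store_url,
        "page": max(1, page),              # must be a positive integer
        "limit": max(1, min(limit, 250)),  # documented range: 1-250
    }

args = collections_args("https://b2bdemoexperience.myshopify.com", limit=500)
```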
Behavior: 3/5

With no annotations, the description carries the full burden. It discloses that the operation is read-only ('Lists collections'), describes pagination (limit, page), and mentions the returned fields. However, it does not address rate limits, authentication, or error handling, which is adequate but not exceptional.

Conciseness: 5/5

The description is concise, with two clear parts: the first states the core action and parameters; the second provides usage guidance. No superfluous sentences; every part adds value.

Completeness: 5/5

Given no output schema, the description fully lists the returned fields (id, title, handle, etc.) and links to the sibling products tool. It covers pagination, parameters, and use cases comprehensively for a simple listing tool.

Parameters: 3/5

Schema coverage is 100%, with each parameter described. The description adds marginal value beyond the schema (e.g., 'up to 250' for limit) but does not significantly enhance understanding, so the baseline of 3 is appropriate.

Purpose: 5/5

The description explicitly states 'Lists collections from any public Shopify storefront URL', using a specific verb and resource. It also distinguishes itself from sibling tools by noting that the returned handles can be fed into the Shopify Products endpoint.

Usage Guidelines: 4/5

The description provides explicit use cases: 'Use the returned handles as input to the Shopify Products endpoint... or to map a competitor's merchandising taxonomy'. It clearly indicates when to use the tool, though not when to avoid it.

hasdata_shopify_products_getProducts (shopify_products: GET /) - Grade: A

Get Shopify Store Products

Pulls products from any public Shopify storefront URL, optionally filtered by a collection handle, with limit (up to 250) and page pagination. Returns product id, title, handle, vendor, product_type, tags, body_html, images, variants with prices/SKUs/inventory status, and timestamps. Use for competitive price monitoring, catalog mirroring, availability tracking, building product datasets for comparison shopping, or feeding structured SKU data into downstream analytics and dropshipping pipelines.

Parameters (JSON Schema)
- url (required): The URL of the Shopify store. For example, 'https://b2bdemoexperience.myshopify.com'.
- page (optional): The page number of the results to retrieve. Must be a positive integer.
- limit (optional): The maximum number of products to retrieve. Must be between 1 and 250.
- collection (optional): The handle of the collection to filter the products. Provide the collection handle as a string.
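A common pattern suggested by the descriptions is feeding a collection handle from the collections endpoint into this one and paging through the results. The sketch below only builds per-page argument dicts; the helper name, handle value, and fixed page count are illustrative assumptions.

```python
# Hypothetical paging helper for hasdata_shopify_products_getProducts.
# Builds one arguments dict per page for a collection-filtered pull.
def paged_product_args(store_url, collection_handle=None, pages=3, limit=250):
    base = {"url": store_url, "limit": limit}
    if collection_handle is not None:
        base["collection"] = collection_handle  # handle from the collections tool
    return [dict(base, page=page) for page in range(1, pages + 1)]

batch = paged_product_args("https://b2bdemoexperience.myshopify.com", "sale")
```

In practice an agent would stop paging once a call returns fewer than `limit` products, rather than using a fixed page count.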
Behavior: 4/5

With no annotations, the description carries the burden. It describes the return fields, pagination (page/limit up to 250), and the optional collection filter. It is transparent about being a read operation on public URLs, but lacks details on rate limits and error handling.

Conciseness: 4/5

The description is well structured, with a clear first sentence stating the purpose followed by details. It is slightly verbose but not excessively so, and every sentence adds value.

Completeness: 4/5

Given no output schema, the description lists the output fields and explains pagination, the collection filter, and use cases. It is adequately complete for a data-retrieval tool.

Parameters: 3/5

The input schema has 100% coverage. The description adds context such as 'up to 250' for limit but mainly repeats schema information, offering use-case context without new parameter details.

Purpose: 4/5

The description clearly states 'Get Shopify Store Products' and explains pulling products from public Shopify store URLs with optional filters. However, it does not explicitly differentiate itself from the sibling tool hasdata_shopify_collections_getCollections.

Usage Guidelines: 3/5

The description lists several use cases (competitive price monitoring, catalog mirroring, etc.) but offers no explicit guidance on when to use this tool versus alternatives, or when not to use it.

hasdata_web_scraping_web_scraping_scrapeWebPage (web_scraping_web_scraping: POST /) - Grade: A

Scrape Web Page

Universal web scraper that fetches any public URL through managed proxies (datacenter or residential, geo-targeted) with optional JS rendering, custom headers, wait conditions, jsScenario actions (click, scroll, fill, waitFor), screenshots, resource/ad/URL blocking, and extractRules/aiExtractRules for LLM-driven structured extraction. Returns HTML, text, markdown, and/or JSON along with status code, extracted emails and links, CSS-selector extractions, and AI-structured fields per schema. Use as a fallback/universal fetcher for sites without a dedicated API, for scraping JS-heavy SPAs, bypassing bot protections, capturing screenshots, or producing clean markdown/structured JSON to feed downstream parsers, RAG pipelines, or data warehouses.

Parameters (JSON Schema)
- url (required): The URL of the web page to scrape.
- wait (optional): Time in milliseconds to wait after the page load.
- headers (optional): Optional custom headers to send with the request.
- waitFor (optional): CSS selector to wait for before scraping.
- blockAds (optional): Whether to block ads.
- blockUrls (optional): List of URLs to block.
- proxyType (optional): Type of proxy to use.
- jsScenario (optional): Enables custom JavaScript interactions on the target webpage during scraping. It is an array where each object defines a specific action or step, executed sequentially: `evaluate` (run custom JavaScript code on the page), `click` (click an element specified by a CSS selector), `wait` (pause for a set duration in milliseconds), `waitFor` (delay until a specific element appears), `waitForAndClick` (wait for an element and then click it), `scrollX`/`scrollY` (scroll to specified positions on the page), and `fill` (enter values into input fields identified by CSS selectors).
- screenshot (optional): Whether to take a screenshot of the page.
- excludeTags (optional): An array of valid CSS selectors; elements matching these selectors are removed from the final output. Each value must be a valid `querySelectorAll` selector. Useful for removing ads, scripts, or other unwanted sections.
- jsRendering (optional): Enable JavaScript rendering.
- extractLinks (optional): Extract links from the page.
- extractRules (optional): Rules for extracting specific data from the page. For example: `{ "title": "h1", "link_href": "a#link @href", "page_text": "body" }`
- outputFormat (optional): The desired response format: `html`, `text`, `markdown`, or `json`. If only one of `html`, `text`, or `markdown` is provided, the API returns the response in that format. If multiple formats are specified, the API returns a JSON response with a key for each requested format. If `json` is included with any other format, the API returns a JSON response with keys for the other specified formats.
- proxyCountry (optional): Optional proxy country code.
- extractEmails (optional): Extract emails from the page.
- aiExtractRules (optional): Custom rules for AI-based (LLM) extraction of structured data directly from the page HTML. Each key in the object is a desired output field name; the value specifies its type and an optional description to guide the AI. Supported types: `string` (plain text), `number`, `boolean`, `list` (an array of values), and `item` (a nested object with its own structure defined under `output`).
- blockResources (optional): Whether to block loading of resources like images and stylesheets.
- includeOnlyTags (optional): An array of valid CSS selectors; only elements matching these selectors are included in the response content. Each value must be a valid `querySelectorAll` selector. Useful for extracting specific parts of the document.
- removeBase64Images (optional): If set to `true`, images embedded as base64-encoded strings are removed from the output. Useful for reducing response size or when base64 images are not needed.
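A request body combining these parameters might look like the sketch below. The parameter names and the extractRules example come from the schema; the target URL, selector strings, the exact key/value shape of each jsScenario action object, the aiExtractRules field, and passing outputFormat as a list are all illustrative assumptions rather than documented formats.

```python
# One possible request body for the universal scraper (illustrative only;
# the action-object shapes and outputFormat type are assumptions).
payload = {
    "url": "https://example.com/products",
    "jsRendering": True,
    "proxyType": "residential",
    "proxyCountry": "US",
    "wait": 2000,                          # ms to wait after page load
    "jsScenario": [                        # actions execute sequentially
        {"waitFor": "#catalog"},           # wait until the catalog renders
        {"click": "#load-more"},           # then click the load-more button
        {"scrollY": 2000},                 # and scroll further down
    ],
    "extractRules": {                      # CSS-selector extraction
        "title": "h1",
        "link_href": "a#link @href",
    },
    "aiExtractRules": {                    # LLM-driven structured extraction
        "products": {
            "type": "list",
            "description": "name and price of each product on the page",
        },
    },
    "outputFormat": ["markdown", "json"],  # multiple formats -> JSON response
}
```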
Behavior: 4/5

With no annotations, the description carries the full burden. It discloses key behaviors: proxy usage, JS rendering, resource blocking, screenshots, output formats, and data extraction. However, it omits rate limits and authentication requirements.

Conciseness: 4/5

The description is a single dense paragraph that conveys comprehensive information efficiently. It could be more structured (e.g., bullet points) for easier parsing, but it remains relatively concise given the tool's complexity.

Completeness: 4/5

The description covers the main use cases and output types but lacks details on error handling and limits. Given the tool's 20 parameters and advanced features, it provides sufficient context for an AI agent.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description gives a high-level overview and extra context for complex parameters like jsScenario and aiExtractRules, but does not significantly enhance per-parameter understanding beyond the schema.

Purpose: 5/5

The description clearly states that the tool is a 'Universal web scraper' and explicitly differentiates it from sibling tools by recommending it as a fallback for sites without a dedicated API.

Usage Guidelines: 4/5

The description specifies use cases such as falling back for sites without an API, scraping JS-heavy SPAs, bypassing bot protections, and capturing screenshots. It implicitly suggests using dedicated scrapers when available, but lacks explicit 'when not to use' guidance.

hasdata_yellowpages_place_getPlaceDetails (yellowpages_place: GET /) - Grade: A

Get Yellow Pages Place Details

Scrapes a single YellowPages business listing URL and returns business name, full address, phone, website, categories, years in business, hours of operation, ratings, review counts, photos, and service descriptions. Use to hydrate a lead with verified NAP data, build a B2B contact database from YellowPages URLs collected via the Search endpoint, or validate business legitimacy and hours before outreach.

Parameters (JSON Schema)
- url (required): The YellowPages URL of the place.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It states the tool 'scrapes' data, implying read-only behavior, and lists the data returned. However, it does not disclose potential issues like rate limits, authentication requirements, or whether the scraping might be blocked. The description is adequate but lacks deeper behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with the tool's name and action. It uses two short paragraphs: first listing the returned fields, then providing use cases. Every sentence adds value, and there is no redundant or extraneous information. Excellent structure for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a single parameter and no output schema, the description reasonably covers the tool's purpose, input, and output by listing many returned fields. It also explains when to use it. However, it could be slightly more specific about the output format (e.g., structure of the response) but is sufficiently complete for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with one parameter 'url' described as 'The YellowPages URL of the place.' The description adds minimal additional meaning beyond confirming it is a single URL. Since the schema already documents the parameter, the description provides no extra semantic value, warranting a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The title 'Get Yellow Pages Place Details' and the description clearly state that the tool scrapes a single business listing URL, returning a comprehensive set of fields. This distinguishes it from the sibling 'hasdata_yellowpages_search_getSearchResults', which presumably collects URLs, making the tool's specific purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases such as hydrating leads, building a B2B database from URLs collected via the Search endpoint, and validating business legitimacy. It implies when to use (after obtaining a URL from Search) but does not explicitly state when not to use or mention alternatives. The guidance is clear and useful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_yellowpages_search_getSearchResults (yellowpages_search: GET /), rated A

Get YellowPages Search Results

Runs a YellowPages business search by keyword plus location with sort (default, distance, averageRating, name), country domain targeting, and page-based pagination. Returns each business with name, listing URL, phone, address, categories, rating, review count, and years in business. Use for B2B lead generation by niche and city, feeding the resulting URLs into the YellowPages Place endpoint for enrichment, or building geo-targeted prospect lists for sales outreach.

Parameters (JSON Schema)

page (optional): The page number of the results to retrieve.
sort (optional): The sorting option for the search results.
domain (optional): YellowPages domain to use. Default is `www.yellowpages.com`.
keyword (required): The search term for which to get the search results.
location (required): The location where to search for businesses with the given keyword.
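The search is paginated by page number and sorted by one of four documented options. A small sketch of building one argument object per results page; the helper and the validation are ours, and the query values are placeholders:

```python
# Sort options listed in the tool description.
VALID_SORTS = {"default", "distance", "averageRating", "name"}

def build_search_args(keyword, location, page=1, sort="default",
                      domain="www.yellowpages.com"):
    """Build arguments for hasdata_yellowpages_search_getSearchResults.

    keyword and location are required; the rest mirror the documented
    defaults (page-based pagination, sort enum, country domain).
    """
    if sort not in VALID_SORTS:
        raise ValueError(f"sort must be one of {sorted(VALID_SORTS)}")
    return {"keyword": keyword, "location": location,
            "page": page, "sort": sort, "domain": domain}

# Lead-gen pattern from the description: one argument object per page,
# with the resulting listing URLs fed to the Place endpoint afterward.
pages = [build_search_args("plumbers", "Austin, TX", page=p, sort="averageRating")
         for p in range(1, 4)]
```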
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior on its own. It states the search scope, return fields, sort options, domain targeting, and page-based pagination. It lacks details on rate limits, auth requirements, and result count per page, but provides adequate transparency for a typical search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two clear sentences with a use case sentence appended. Front-loaded with purpose. No fluff, though the use case sentence could be integrated to be more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no output schema, the description lists key return fields (name, URL, phone, etc.) and mentions pagination. It adequately covers the core functionality and ties to the sibling place endpoint for enrichment.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds context for the sort enum, domain default, and page-based pagination, but does not add significant new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it runs a YellowPages business search by keyword and location, and lists the return fields. It explicitly distinguishes itself from the sibling `hasdata_yellowpages_place_getPlaceDetails` by mentioning feeding URLs into that endpoint for enrichment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases like B2B lead generation and geo-targeted list building. However, it does not specify when not to use or mention alternatives beyond the sibling place endpoint.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_yelp_place_getPlaceDetails (yelp_place: GET /), rated A

Get Yelp Place Details

Fetches a single Yelp business by Yelp ID or alias with domain targeting. Returns name, address, phone, website, price range, categories, overall rating, review count, hours, amenities, photos, and highlighted reviews. Use to enrich leads or listings with verified Yelp metadata, monitor a competitor's rating and review count over time, or validate hours/amenities before displaying venue details to end users.

Parameters (JSON Schema)

domain (optional): Yelp domain to use. Default is `www.yelp.com`.
placeId (required): The Yelp ID or Yelp Alias of the place. For example, 'jPIZ3FR5LNcwPuUHi2Fe4g' or 'mcdonalds-new-york-386'.
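The placeId accepts two shapes: an opaque Yelp ID or a human-readable alias slug. A hedged sketch that tells the two apart before building the call; Yelp does not document a formal grammar for either, so the regexes below are heuristics fitted to the schema's two examples:

```python
import re

# Heuristics only: an alias is a lowercase hyphenated slug
# ('mcdonalds-new-york-386'); an opaque ID is a 22-character
# URL-safe token ('jPIZ3FR5LNcwPuUHi2Fe4g'). Check the alias
# shape first, since a 22-character alias would also match ID_RE.
ALIAS_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)+$")
ID_RE = re.compile(r"^[A-Za-z0-9_-]{22}$")

def classify_place_id(place_id: str) -> str:
    if ALIAS_RE.match(place_id):
        return "alias"
    if ID_RE.match(place_id):
        return "id"
    raise ValueError(f"unrecognized placeId shape: {place_id!r}")

# Either shape is a valid argument for hasdata_yelp_place_getPlaceDetails.
args = {"placeId": "mcdonalds-new-york-386", "domain": "www.yelp.com"}
```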
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description correctly implies a read-only operation ('fetches') and lists the return fields for transparency. It lacks details on error handling and rate limits, but is adequate for a simple fetch.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Very concise, well-structured, front-loaded with main action, and no unnecessary words. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description lists all major return fields. For a single-entity fetch, it is sufficiently complete. Could mention response format but not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining that placeId can be an ID or an alias, with examples, and the domain parameter documents its default, adding context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it fetches a single Yelp business by ID or alias with domain targeting. It distinguishes itself from search tools by specifying 'single' and listing detailed return fields.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases like enriching leads and monitoring competitors. Does not explicitly state when not to use or mention alternatives like the sibling search tool, but the context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_yelp_search_getSearchResults (yelp_search: GET /), rated A

Get Yelp Search Results

Runs a Yelp business search by keyword and location with optional map-bounded radius via the l parameter (g:lon1,lat1,lon2,lat2), domain targeting, and offset-based pagination. Returns a ranked list of businesses with Yelp alias/ID, name, categories, rating, review count, price tier, neighborhood, and thumbnail. Use the returned aliases as input to the Yelp Place endpoint for full details, to power local-discovery UIs, or to build market-share/competitor datasets for a niche in a given geography.

Parameters (JSON Schema)

l (optional): Defines the distance or map radius for the search results. For example: `g:-95.2486,29.8496,-95.4277,29.6324`.
start (optional): Result offset for pagination (e.g., 0 for the first page, 10 for the 2nd page, etc.).
domain (optional): Yelp domain to use. Default is `www.yelp.com`.
keyword (required): The search term for which to get the search results.
location (required): The location where to search for businesses with the given keyword.
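The two non-obvious parameters are the map-bounds string and the result offset. A small sketch of constructing both; the corner ordering follows the schema's example, and the 10-results-per-page assumption comes from the `start` example, not from a documented limit:

```python
def bbox_param(lon1: float, lat1: float, lon2: float, lat2: float) -> str:
    """Format the `l` map-bounds parameter as g:lon1,lat1,lon2,lat2.

    The schema's example puts longitude before latitude for each
    corner, so we follow that ordering here.
    """
    return f"g:{lon1},{lat1},{lon2},{lat2}"

def page_offset(page: int, page_size: int = 10) -> int:
    # Offset-based pagination: page 1 -> start=0, page 2 -> start=10.
    # page_size=10 is inferred from the schema's example, not documented.
    return (page - 1) * page_size

# Second page of a map-bounded search (coordinates from the schema example).
args = {
    "keyword": "coffee",
    "location": "Houston, TX",
    "l": bbox_param(-95.2486, 29.8496, -95.4277, 29.6324),
    "start": page_offset(2),
}
```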
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It details return fields, pagination, and map-radius behavior. It is missing information on rate limits and authentication, but for a search tool it is reasonably transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, well-structured description: first sentence states purpose, subsequent sentences add detail on parameters and use cases. No wasted words; every sentence contributes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description covers returned fields and pagination. It provides sufficient context for an agent to use the tool effectively, though could mention result limits or data freshness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining the 'l' parameter format (g:lon1,lat1,lon2,lat2), offset pagination for 'start', and domain targeting, enhancing understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves Yelp business search results by keyword and location, distinguishing it from sibling tools like hasdata_yelp_place_getPlaceDetails. Uses specific verb 'Get' and enumerates use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explains when to use the tool (search by keyword/location, pagination, map radius) and suggests using returned aliases for the Place endpoint. Lacks explicit when-not-to-use or alternatives, but provides solid contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_zillow_listing_getRealEstateListings (zillow_listing: GET /), rated A

Get Zillow Real Estate Listings

Searches Zillow for-sale, for-rent, and sold listings by keyword with rich filters (price, beds, baths, home type, year built, lot/square footage, HOA, listing status, amenities, views, pet policy, days on Zillow) and pagination. Returns each listing with address, Zillow URL/zpid, price, Zestimate, beds/baths, sqft, home type, status, days on Zillow, coordinates, thumbnail, and listing agent. Use for real-estate market dashboards, rental pricing analysis, agent lead lists, inventory tracking, and collecting URLs for the Zillow Property endpoint.

Parameters (JSON Schema)

hoa (optional): The Homeowners Association (HOA) fee.
page (optional): The page number of the results to retrieve.
sort (optional): The sorting option for the search results.
type (required): The type of listing.
pets[] (optional): An array of pet options.
keyword (required): The keyword used to search for listings.
tours[] (optional): An array of tour options.
views[] (optional): An array of views.
keywords (optional): Additional keywords to refine the search.
beds[max] (optional): The maximum number of bedrooms.
beds[min] (optional): The minimum number of bedrooms.
basement[] (optional): An array of basement options.
baths[max] (optional): The maximum number of bathrooms.
baths[min] (optional): The minimum number of bathrooms.
moveInDate (optional): The desired move-in date.
price[max] (optional): The maximum price of the listing.
price[min] (optional): The minimum price of the listing.
homeTypes[] (optional): An array of home types to filter the listings.
listingType (optional): The category of the listing.
daysOnZillow (optional): The number of days a listing has been on Zillow.
lotSize[max] (optional): The maximum lot size.
lotSize[min] (optional): The minimum lot size.
mustHaveGarage (optional): If set to true, only listings with a garage will be included.
yearBuilt[max] (optional): The maximum year the property was built.
yearBuilt[min] (optional): The minimum year the property was built.
parkingSpotsMin (optional): The minimum number of parking spots.
singleStoryOnly (optional): If set to true, only single-story properties will be included.
squareFeet[max] (optional): The maximum square footage.
squareFeet[min] (optional): The minimum square footage.
otherAmenities[] (optional): An array of other amenities.
propertyStatus[] (optional): An array of property statuses.
hide55plusCommunities (optional): If set to true, 55+ communities will be excluded.
listingPublishOptions[] (optional): An array of listing publish options.
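The bracketed key names (`beds[min]`, `homeTypes[]`) suggest nested range and array filters flattened into a query string. A hedged sketch of that flattening; the convention is inferred from the key names, and the filter values (e.g. `forSale`) are placeholders since the schema does not enumerate them:

```python
from urllib.parse import urlencode

def flatten_filters(filters: dict) -> dict:
    """Flatten nested range/array filters into bracketed keys:
    {"beds": {"min": 2}} -> {"beds[min]": 2},
    {"homeTypes": [...]} -> {"homeTypes[]": [...]}.
    """
    flat = {}
    for key, value in filters.items():
        if isinstance(value, dict):
            for bound, v in value.items():
                flat[f"{key}[{bound}]"] = v
        elif isinstance(value, list):
            # Collect repeated array values under one bracketed key.
            for v in value:
                flat.setdefault(f"{key}[]", []).append(v)
        else:
            flat[key] = value
    return flat

args = flatten_filters({
    "keyword": "Denver, CO",
    "type": "forSale",           # placeholder value; enum not documented
    "price": {"min": 300000, "max": 600000},
    "beds": {"min": 2},
    "homeTypes": ["house", "condo"],
})
# doseq=True repeats array-valued keys in the encoded query string.
query = urlencode(args, doseq=True)
```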
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must cover behavioral traits. It mentions pagination (page parameter) and return fields but omits rate limits, authentication requirements, or any side effects. Adequate but not fully transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with a clear title and summary. Information is front-loaded. Slightly verbose with enumeration of filters, but overall concise enough for the complexity of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 33 parameters and no output schema, the description covers the return fields and typical use cases. Lacks details on pagination metadata or total results, but sufficient for many agents.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by summarizing filter categories (price, beds, etc.) and mentioning pagination, helping agents understand available options beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it retrieves Zillow real estate listings and specifies the search and filter capabilities. It lists return fields but does not distinguish the tool from similar sibling listing tools such as Redfin, missing a clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides usage examples (dashboards, agent leads, etc.) but does not indicate when not to use it or contrast with alternatives. No explicit exclusion criteria or guidance on choosing between Zillow and Redfin.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hasdata_zillow_property_getPropertyDetails (zillow_property: GET /), rated A

Get Zillow Property Details

Fetches the full Zillow property page by URL/zpid, with optional agent email extraction. Returns address, list price, Zestimate and Rent Zestimate, price and tax history, beds/baths, living area, lot size, year built, home type, HOA, days on Zillow, listing description, features/amenities, photos, school assignments, walk/transit scores, and listing agent/broker (plus email when enabled). Use for valuation models, CMA generation, investor underwriting, rental yield analysis, and enriching buyer/seller agent assistants with authoritative property data.

Parameters (JSON Schema)

url (required): The URL of the property on Zillow. Must be a valid Zillow property URL.
extractAgentEmails (optional): If enabled, attempts to extract agent email addresses from the property details. Increases the cost of the request.
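Since extractAgentEmails raises the request cost, it is worth validating the URL locally and opting in explicitly. A sketch using the common /<digits>_zpid/ shape of Zillow homedetails URLs as a heuristic; the helper, the regex, and the example URL are ours, not part of the tool:

```python
import re

# Heuristic: Zillow property pages commonly end in /<digits>_zpid/.
ZPID_RE = re.compile(r"/(\d+)_zpid/?")

def build_property_args(url: str, extract_agent_emails: bool = False):
    """Build arguments for hasdata_zillow_property_getPropertyDetails
    and return the zpid parsed from the URL. The pattern check is a
    local sanity check, not a guarantee the page exists.
    """
    m = ZPID_RE.search(url)
    if "zillow.com" not in url or not m:
        raise ValueError(f"not a Zillow property URL: {url}")
    args = {"url": url}
    if extract_agent_emails:
        # Opt in explicitly: the schema notes this increases request cost.
        args["extractAgentEmails"] = True
    return args, m.group(1)

# Hypothetical listing URL for illustration.
args, zpid = build_property_args(
    "https://www.zillow.com/homedetails/123-Main-St-Denver-CO-80203/12345678_zpid/",
    extract_agent_emails=True,
)
```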
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the optional agent-email extraction and its cost implication, but with no annotations provided, the description lacks details on rate limits, authentication, and other side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, no wasted words. Structured with a brief summary followed by detailed output list. Could be slightly more front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lists major return fields compensating for lack of output schema. Could mention response format (structured data) but sufficient for an agent to understand data returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Adds meaning beyond schema: explains email extraction increases cost. Schema coverage is 100%, so description adds useful context, especially for the optional parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it fetches Zillow property details by URL/zpid, listing returned fields. Distinct from sibling listing tools by focusing on a single property page.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions specific use cases like valuation models, CMA generation, and investor underwriting, providing context for when to use. However, does not explicitly exclude use for listings or compare to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
