Glama
HasData

hasdata-mcp

Official

bing_serp: GET /

hasdata_bing_serp_getSearchResults

Scrape Bing search results using geo targeting, market, country, safesearch, time filters, device type, and pagination. Ideal for SEO rank tracking, SERP feature monitoring, and training search agents.

Instructions

Get Bing Search Results

Fetches Bing SERPs for a query with geo targeting (location/lat/lon), market (mkt), country (cc), safesearch (off/moderate/strict), time/custom filters, device type, and pagination (first offset, count up to 50). Returns organic results (title, url, snippet, displayed url, position), related searches, answer boxes/knowledge panels, and pagination metadata. Use for SEO rank tracking, SERP feature monitoring, Bing-specific visibility audits, and training/eval data for search agents.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `q` | Yes | The search term for which to scrape the SERP. | |
| `location` | No | Defines the search's origin location. For realistic results, set the location at the city level. If omitted, the proxy's location may be used. | |
| `lat` | No | GPS latitude for the search origin. | |
| `lon` | No | GPS longitude for the search origin. | |
| `mkt` | No | Market code combining language and country (e.g., `en-US`) for the market to search from. | |
| `cc` | No | The two-letter country code for the country to search from. | |
| `safe` | No | Adult content filtering option (`off`, `moderate`, or `strict`). | |
| `filters` | No | Applies filters to narrow search results, including date-based options: `ex1:"ez1"` (past 24 hours), `ex1:"ez2"` (past week), `ex1:"ez3"` (past month). For complex filters, run a Bing search and copy the `filters` parameter from the URL. | |
| `deviceType` | No | The device type for the search. | |
| `first` | No | Number of search results to skip, used for pagination. A value of 1 indicates the first page of results, 11 the second page, and 21 the third. | 1 |
| `count` | No | Number of results per page, from 1 to 50. | |
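As a rough sketch of how the pagination parameters interact (assuming the page math implied by the `first` description above, where page 1 starts at offset 1, page 2 at 11, and page 3 at 21 with the default count of 10), a small helper and a hypothetical argument payload might look like:

```python
def first_offset(page: int, count: int = 10) -> int:
    """Compute the `first` parameter for a 1-based page number.

    Assumes Bing-style offsets: page 1 -> 1, page 2 -> 11, page 3 -> 21
    when count is 10.
    """
    if page < 1:
        raise ValueError("page numbers are 1-based")
    if not 1 <= count <= 50:
        raise ValueError("count must be between 1 and 50")
    return (page - 1) * count + 1

# Hypothetical arguments for hasdata_bing_serp_getSearchResults;
# the values below are illustrative, not taken from the listing.
args = {
    "q": "coffee shops",
    "location": "Seattle, Washington, United States",
    "mkt": "en-US",
    "safe": "moderate",
    "count": 10,
    "first": first_offset(3),  # third page of results -> 21
}
```

This is only a convenience sketch; the tool itself accepts `first` directly, so a caller could equally pass the raw offset.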
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the output elements (organic results, related searches, answer boxes, pagination metadata) and the input parameters. However, it omits potential side effects, rate limits, and authorization requirements. The description implies read-only behavior but does not confirm it.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: first lists capabilities, second lists return types and use cases. It is front-loaded with the core action and contains no fluff. Every phrase is informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description covers input parameters comprehensively and explains the expected output (organic results, related searches, etc.). It could discuss error handling or edge cases but is generally complete for a search tool with many parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, meaning all parameters have descriptions. The description adds grouping context (e.g., 'geo targeting (location/lat/lon)') but does not significantly enhance meaning beyond the schema. A baseline score of 3 is appropriate, as the description adds only modest value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Get Bing Search Results' and details the function: 'Fetches Bing SERPs for a query'. It lists specific capabilities (geo targeting, market, safesearch, filters, etc.) and clearly differentiates from siblings by being Bing-specific, whereas siblings are for Google, Amazon, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases: 'Use for SEO rank tracking, SERP feature monitoring, Bing-specific visibility audits, and training/eval data for search agents.' It also enumerates the geo targeting and filter parameters, giving useful context. However, it does not explicitly contrast with sibling tools or state when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

