
Bing Search MCP Server

by leehanchung

bing_web_search

Performs a web search via the Bing Search API to retrieve relevant websites and general information; callers specify a query, result count, pagination offset, and market code.

Instructions

Performs a web search using the Bing Search API for general information and websites.

Args:
    query: Search query (required)
    count: Number of results (1-50, default 10)
    offset: Pagination offset (default 0)
    market: Market code like en-US, en-GB, etc.

Input Schema

Name      Required  Description  Default
count     No        —            —
market    No        —            en-US
offset    No        —            —
query     Yes       —            —
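As a rough illustration, the JSON Schema that a framework like FastMCP typically derives from the function signature might look like the sketch below. The field names and defaults come from the signature shown in the implementation reference; the exact structure the framework emits is an assumption.

```python
# Hypothetical sketch of the derived input schema for bing_web_search.
# Names and defaults mirror the function signature; the precise shape
# FastMCP generates may differ.
input_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "count": {"type": "integer", "default": 10},
        "offset": {"type": "integer", "default": 0},
        "market": {"type": "string", "default": "en-US"},
    },
    "required": ["query"],
}

print(input_schema["required"])
print(input_schema["properties"]["market"]["default"])
```

Note that none of the properties carry a description, which is why the parameter semantics must be recovered from the tool description itself.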

Implementation Reference

  • The core handler function for the 'bing_web_search' tool, registered via the @server.tool() decorator. The type annotations in the function signature define the input schema. The handler applies rate limiting, calls the Bing Web Search API, handles errors, and formats the search results as a string.
    @server.tool()
    async def bing_web_search(
        query: str, count: int = 10, offset: int = 0, market: str = "en-US"
    ) -> str:
        """Performs a web search using the Bing Search API for general information
        and websites.
    
        Args:
            query: Search query (required)
            count: Number of results (1-50, default 10)
            offset: Pagination offset (default 0)
            market: Market code like en-US, en-GB, etc.
        """
        # Get API key from environment
        api_key = os.environ.get("BING_API_KEY", "")
    
        if not api_key:
            return (
                "Error: Bing API key is not configured. Please set the "
                "BING_API_KEY environment variable."
            )
    
        try:
            check_rate_limit()
    
            headers = {
                "User-Agent": USER_AGENT,
                "Ocp-Apim-Subscription-Key": api_key,
                "Accept": "application/json",
            }
    
            params = {
                "q": query,
                "count": min(count, 50),  # Bing limits to 50 results max
                "offset": offset,
                "mkt": market,
                "responseFilter": "Webpages",
            }
    
            search_url = f"{BING_API_URL}v7.0/search"
    
            async with httpx.AsyncClient() as client:
                response = await client.get(
                    search_url, headers=headers, params=params, timeout=10.0
                )
    
                response.raise_for_status()
                data = response.json()
    
                if "webPages" not in data:
                    return "No results found."
    
                results = []
                for result in data["webPages"]["value"]:
                    results.append(
                        f"Title: {result['name']}\n"
                        f"URL: {result['url']}\n"
                        f"Description: {result['snippet']}"
                    )
    
                return "\n\n".join(results)
    
        except httpx.HTTPError as e:
            return f"Error communicating with Bing API: {str(e)}"
        except Exception as e:
            return f"Unexpected error: {str(e)}"
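    The result-formatting step can be exercised in isolation. The sketch below extracts that logic into a standalone function and runs it against an invented response payload; the field names ("webPages", "value", "name", "url", "snippet") match the handler above, but the sample data is made up.

    ```python
    # Standalone sketch of the result-formatting step in bing_web_search,
    # run against a fabricated Bing v7-style payload.
    sample = {
        "webPages": {
            "value": [
                {"name": "Example", "url": "https://example.com",
                 "snippet": "An example page."},
                {"name": "Example 2", "url": "https://example.org",
                 "snippet": "Another page."},
            ]
        }
    }

    def format_results(data: dict) -> str:
        # Mirrors the handler: bail out when no web results are present.
        if "webPages" not in data:
            return "No results found."
        # One Title/URL/Description block per result, blank-line separated.
        return "\n\n".join(
            f"Title: {r['name']}\nURL: {r['url']}\nDescription: {r['snippet']}"
            for r in data["webPages"]["value"]
        )

    print(format_results(sample))
    print(format_results({}))  # -> No results found.
    ```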
  • Helper function for rate limiting, called within bing_web_search to enforce API usage limits.
    def check_rate_limit():
        """Check if we're within rate limits"""
        now = time.time()
        if now - request_count["last_reset"] > 1:
            request_count["second"] = 0
            request_count["last_reset"] = now
    
        if (
            request_count["second"] >= RATE_LIMIT["per_second"]
            or request_count["month"] >= RATE_LIMIT["per_month"]
        ):
            raise Exception("Rate limit exceeded")
    
        request_count["second"] += 1
        request_count["month"] += 1
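    The limiter is a fixed one-second window over a shared counter dict. The self-contained sketch below reproduces that logic with invented RATE_LIMIT values (the real server's quotas may differ) and shows the window filling up.

    ```python
    import time

    # Illustrative quotas only; the actual server's limits are not shown here.
    RATE_LIMIT = {"per_second": 3, "per_month": 15000}
    request_count = {"second": 0, "month": 0, "last_reset": time.time()}

    def check_rate_limit() -> None:
        now = time.time()
        if now - request_count["last_reset"] > 1:  # start a new one-second window
            request_count["second"] = 0
            request_count["last_reset"] = now
        if (request_count["second"] >= RATE_LIMIT["per_second"]
                or request_count["month"] >= RATE_LIMIT["per_month"]):
            raise Exception("Rate limit exceeded")
        request_count["second"] += 1
        request_count["month"] += 1

    # Three calls in the same second pass; the fourth raises.
    for _ in range(3):
        check_rate_limit()
    try:
        check_rate_limit()
        raised = False
    except Exception:
        raised = True
    print(raised)  # True
    ```

    Because the exception propagates into the handler's generic `except Exception` branch, a rate-limited call is reported to the agent as an "Unexpected error" string rather than a distinct status.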
  • The @server.tool() decorator registers the bing_web_search function as an MCP tool.
    @server.tool()
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool performs a web search but doesn't mention critical aspects like rate limits, authentication needs, error handling, or response format. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
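As an illustration only, a description that discloses this tool's behavior might read like the string below. The auth, rate-limit, and output details are taken from the implementation shown above; the wording itself is an invented example, not part of the actual server.

```python
# Hypothetical rewrite of the tool description with behavioral disclosure.
# Facts (BING_API_KEY, rate limiting, output format) come from the handler
# code above; the phrasing is an example only.
IMPROVED_DESCRIPTION = (
    "Performs a web search using the Bing Search API for general "
    "information and websites. Read-only; no side effects. Requires the "
    "BING_API_KEY environment variable and returns an error string if it "
    "is unset. Subject to per-second and per-month rate limits. Returns "
    "plain text: one Title/URL/Description block per result separated by "
    "blank lines, or 'No results found.'"
)
print(IMPROVED_DESCRIPTION)
```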

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by a structured list of parameters with clear explanations. Every sentence earns its place by adding value, with no redundant or verbose content. The bullet-point-like format enhances readability without wasting space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema, no annotations), the description is partially complete. It covers parameter semantics well but lacks behavioral details (e.g., rate limits, auth) and output information. For a search tool, this is adequate but leaves clear gaps that could hinder effective use by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds substantial meaning beyond the input schema, which has 0% description coverage. It explains each parameter's purpose: 'query' as the search query (required), 'count' as number of results with range and default, 'offset' for pagination with default, and 'market' as market code with examples. This compensates well for the schema's lack of descriptions, though it doesn't cover all possible nuances (e.g., market code formats beyond examples).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Performs a web search using the Bing Search API for general information and websites.' This specifies the verb ('performs a web search'), the resource ('Bing Search API'), and the scope ('general information and websites'). However, it doesn't explicitly differentiate the tool from siblings like bing_image_search or bing_news_search, which a score of 5 would require.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings (bing_image_search, bing_news_search). It mentions 'general information and websites,' which implies usage for web content, but lacks explicit alternatives or exclusions. Without clear when/when-not instructions, this falls short of higher scores.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
