AnySearch
Server Details
Unified real-time search engine skill for AI agents.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
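The URL field above is not populated in this listing. For orientation, here is a minimal connection sketch using the official MCP Python SDK over Streamable HTTP; the endpoint URL is a placeholder, not the server's real address:

```python
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint; substitute the real AnySearch URL from this listing.
SERVER_URL = "https://example.com/anysearch/mcp"

async def main() -> None:
    # streamablehttp_client yields (read_stream, write_stream, get_session_id)
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Expected tool names: search, batch_search, extract, list_domains
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```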
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 4 of 4 tools scored.
Each tool has a distinct purpose: search for general or vertical searches, batch_search for parallel independent queries, extract for fetching URL content, and list_domains for query format instructions. No overlap in functionality.
Names are lowercase with underscores, but there is a mix of single words (search, extract) and two-word patterns (batch_search, list_domains). The verb_noun pattern is present in list_domains but not consistently applied.
With 4 tools, the server is well-scoped for a search utility. Each tool earns its place, covering search, batch search, domain instructions, and page extraction without redundancy.
Core search and fetch operations are covered. Minor gaps exist, such as no direct tool for search filtering or news-specific searches, but the set handles the primary use cases effectively.
Available Tools
4 tools

batch_search · A · Read-only · Destructive
Run multiple searches in parallel and return all results merged into one response.
When to use
Use batch_search instead of multiple sequential search calls when you have 2–5 independent queries. This saves context window space by returning all results in a single tool call.
Constraints
Maximum 5 queries per call
Each query item has the same structure as the search tool parameters
Queries run in parallel; a single query failure does not block others
Example
Instead of: search(query=A) → search(query=B) → search(query=C)
Use: batch_search(queries=[{query:A,...}, {query:B,...}, {query:C,...}])
| Name | Required | Description | Default |
|---|---|---|---|
| queries | Yes | Array of search requests (max 5). Each item follows the search tool schema: query is required; domain+sub_domain are optional (omit for general web search, required for vertical search). | |
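As a concrete sketch of the batch call, assuming an initialized MCP `ClientSession` (see the connection sketch under Server Details) and using only the argument shapes documented above; the specific queries are made-up examples:

```python
# Three independent queries in one tool call instead of three sequential
# search calls; results come back merged into a single response.
result = await session.call_tool(
    "batch_search",
    {
        "queries": [
            # General web search: omit domain and sub_domain entirely.
            {"query": "what is quantum computing"},
            # Vertical search: domain + sub_domain must come from a prior
            # list_domains call; finance.us_stock queries require the
            # Stock:/Forex:/News:/Commodities: prefix.
            {"query": "Stock:AAPL", "domain": "finance",
             "sub_domain": "finance.us_stock"},
            {"query": "Stock:MSFT", "domain": "finance",
             "sub_domain": "finance.us_stock"},
        ]
    },
)
# Queries run in parallel; a single failure does not block the others.
```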
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds value by explaining parallel execution and fault tolerance. However, the annotations contradict it: destructiveHint=true is set even though the tool only performs read-only searches. The description does not address this inconsistency, creating ambiguity about whether the tool has destructive side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (When to use, Constraints, Example). No wasted words. The example effectively illustrates the use case in minimal space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a batch tool with no output schema, the description explains results are merged into one response and provides constraints. However, it lacks specifics on the merged response format (e.g., structure, ordering). Additional detail would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a description for the 'queries' field. The description adds further semantics: each item follows the search tool schema, with query required and domain+sub_domain optional. The example reinforces usage. While schema already covers structure, the description clarifies relationship to sibling tool parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it runs multiple searches in parallel and merges results into one response. It distinguishes itself from the sibling 'search' tool by specifying it's for 2-5 independent queries to save context window space.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: instead of multiple sequential search calls. Also lists constraints: maximum 5 queries per call, parallel execution, and that a single query failure does not block others. Provides a concrete example contrasting sequential vs. batch.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract · A · Read-only · Destructive
Fetch a URL and return its full content as clean Markdown.
When to use — call extract after search whenever:
The search snippet is too short or truncated to answer the question
User asks to 'read', 'open', 'summarize', or 'get details from' a specific URL
You need to verify a specific claim, statistic, or fact from the original source
The result points to a full article, report, documentation page, or paper worth reading in full
The answer requires data visible only in the page body (tables, sections, code blocks not captured in snippet)
User provides a URL directly and asks about its content
When NOT to use
The search snippet already contains a complete, sufficient answer
You only need the URL or title (not the page body)
Constraints
url must start with http:// or https://
Only HTML pages are supported; PDF/binary files will return an error
Content is truncated at 50,000 characters
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The page URL to fetch. Must start with http:// or https://. | |
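A minimal sketch of an extract call under the same assumed session; the URL is a placeholder:

```python
# Fetch one HTML page and get its body back as clean Markdown.
# PDF/binary URLs return an error; content is truncated at 50,000 chars.
page = await session.call_tool(
    "extract",
    {"url": "https://example.com/some-article"},  # must start with http(s)://
)
```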
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations include destructiveHint=true, which contradicts the description's clearly read-only behavior ('Fetch a URL and return its full content as clean Markdown'). The description does not disclose any destructive behavior, making this a serious inconsistency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (when to use, when not, constraints). It is concise without being overly terse, though it could be slightly more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple tool with one parameter and no output schema, the description covers purpose, usage guidelines, constraints, and limitations (truncation, supported formats). Sufficient for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% but description adds practical constraints beyond schema: URL must start with http/https, only HTML supported, truncation at 50k characters. This provides useful context for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Fetch') and resource ('URL content as clean Markdown'), clearly distinguishing itself from sibling tools like 'search' (snippets) and 'list_domains' (domains).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides detailed when-to-use scenarios (after search if snippet insufficient, user asks to read/summarize, verify claims) and explicit when-not-to-use conditions (snippet sufficient, only need URL/title).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_domains · A · Read-only · Destructive
Call this before search to get the sub-domain catalog and MANDATORY query format rules for a given domain.
When to call — pick the domain that matches what the user is asking about:
finance → stocks, ticker, ETF, forex, exchange rate, currency, commodities, oil/gold price, crypto, Bitcoin, earnings, SEC filing, IPO, bond yield, financial news
academic → paper, research, journal, thesis, citation, DOI, abstract, peer review, arxiv, pubmed, scholar, literature, study, dataset
ip → patent, prior art, invention, CPC, IPC, EPO, WIPO, assignee, inventor, trademark
legal → law, statute, regulation, case, ruling, judgment, court, legislation, compliance, civil code, criminal code
travel → flight, airline, departure, arrival, delay, airport, IATA, POI, attraction, hotel, itinerary, travel guide, visa
gaming → game, Steam, price, discount, esports, player stats, rank, champion, LOL, DOTA, CS2
security → malware, virus, CVE, vulnerability, IP reputation, threat, IOC, hash, VirusTotal, OSINT, phishing, ransomware
geo → address, coordinates, geocode, POI, nearby, restaurant, walkability, transit score
environment → weather, forecast, AQI, air quality, PM2.5, satellite, NDVI, carbon, emission, agriculture
energy → electricity price, power grid, oil price, gas price, energy market
business → job, hiring, salary, company contact, B2B, lead, recruiter, HR
code → library docs, API reference, npm, pip, cargo, code snippet, function, repo search
health → clinical trial, diagnosis, drug, symptom, medical literature, WHO stats, psychology
education → course, lecture, textbook, tutorial video, MOOC, open courseware
tech → product specs, barcode, HackerNews, ProductHunt, tech review
ecommerce → price comparison, Walmart, shopping, product search
film → movie, TV show, anime, torrent, streaming
music → album, artist, lyrics, music torrent
fashion → cosmetics ingredients, beauty, trend, streetwear release
home → recipe, repair guide, food safety, walkability, appliance
religion → bible, quran, torah, buddhist texts, manuscripts
Returns
Markdown table filtered to the specified domain: sub_domain | description | query_format | zone
CRITICAL: How to use results
sub_domain is the PRIMARY routing key — always pass it to search
query_format column is MANDATORY — wrong format = wrong data source = wrong results
Hard constraint examples: finance.us_stock requires a Stock:/Forex:/News:/Commodities: prefix; security.noise requires a single IPv4; geo.weather requires a city name or lat,lon
If multiple sub_domains match different aspects, make PARALLEL search calls — one per sub_domain
zone=CN → set zone="cn" in search; zone=ALL/US/EU → omit zone
Cache Rule — NEVER repeat list_domains for the same domain
Once you have called list_domains for a domain in this conversation, the result is valid for the ENTIRE session. Do NOT call list_domains again for the same domain — reuse the sub_domain and query format you already received. If you need info for multiple domains, pass them all in the domains array in a single call.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | No | Filter by a single domain. Mutually exclusive with domains array. | |
| domains | No | Batch query for multiple domains in a single call. Takes priority over domain. Each item must be a valid domain value. Max 5. | |
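A short sketch of both parameter forms, again assuming an initialized session:

```python
# Single-domain lookup: returns the Markdown table
# sub_domain | description | query_format | zone for that domain.
catalog = await session.call_tool("list_domains", {"domain": "finance"})

# Batch form: up to 5 domains in one call; `domains` takes priority
# over `domain` if both are supplied.
catalogs = await session.call_tool(
    "list_domains",
    {"domains": ["finance", "security", "geo"]},
)

# Per the cache rule: reuse these results for the rest of the session
# instead of calling list_domains again for the same domain.
```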
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description depicts a read-only operation that returns catalog data with no side effects, yet the annotation destructiveHint=true suggests the tool may have destructive effects. The description does not disclose any destructive behavior, so the unresolved contradiction undermines transparency; this is a serious inconsistency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (When to call, Returns, CRITICAL, Cache Rule), uses bold and lists for readability, and is front-loaded with the most important information. Every sentence adds value, and the lengthy domain list is justified by its utility. It is efficient despite its length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description fully explains the return format (Markdown table columns: sub_domain, description, query_format, zone). It covers all essential aspects: how to call, what you get, how to interpret and use the results, caching rules, and common pitfalls. The tool is not overly complex, and the description is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds significant value by providing a detailed mapping from each domain enum value to example topics and use cases, helping the agent choose the correct value. It also explains the mutual exclusivity and priority of 'domain' vs 'domains' and the max limit of 5. This goes well beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Call this before search to get the sub-domain catalog and MANDATORY query format rules for a given domain.' It specifies the action (get catalog and rules), the resource (sub-domains, query formats), and distinguishes from siblings (search, batch_search, extract) by positioning it as a preparatory step. The extensive list of domains and associated topics further clarifies the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to call (before search), how to pick the domain based on user query, how to use the results (pass sub_domain to search, obey query_format, parallel calls for multiple sub_domains, handle zone), and what not to do (never repeat list_domains for same domain). It also includes hard constraint examples and caching rules, making the usage crystal clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search · A · Read-only · Destructive
Execute a search and return ranked Markdown results (title, URL, snippet).
Two modes
Mode 1 — General web search (no list_domains needed)
Omit domain and sub_domain entirely. Use when the query is open-ended and does not target a specific structured data source. Example: search(query="what is quantum computing")
Mode 2 — Vertical search (call list_domains first)
Use when the query targets a specific domain: stocks, patents, flights, CVEs, weather, academic papers, etc. Steps:
1. Call list_domains to get the sub_domain and mandatory query format for the target domain.
2. Pass domain + sub_domain from list_domains output. Never guess them.
3. Format query exactly as specified in the query_format column — wrong format = wrong results.
Decision rule — which mode to use
Use Mode 2 (vertical) when ANY of these apply:
Query involves a ticker, DOI, CVE, IATA code, patent number, address, or other structured identifier
Query targets a specific vertical: finance, legal, academic, travel, security, geo, environment, etc.
User asks for real-time or specialized data (stock price, weather, flight status, drug info, etc.)
Use Mode 1 (general) when the query is purely conversational or open-ended with no structured lookup.
After getting results — when to call extract
Search returns titles + snippets only. Call extract when:
The snippet is truncated or insufficient to answer the question
User asks to read, summarize, or get details from a specific URL
You need to verify a claim or fact from the source page
The answer requires data only visible in the page body (tables, sections not in snippet)
Query decomposition
One intent per search call. For 2–5 independent queries use batch_search instead.
WRONG: search(query="AAPL price and earnings and analyst rating")
RIGHT: batch_search(queries=[{query:"AAPL price",...}, {query:"AAPL earnings",...}])
| Name | Required | Description | Default |
|---|---|---|---|
| zone | No | Geographic zone: cn (mainland China) or intl (international). Required when the zone column in list_domains output is CN. | |
| query | Yes | Search query with ONE intent only. For vertical search, format MUST follow the query_format column from list_domains. | |
| domain | No | Vertical domain from list_domains. Omit for general web search. | |
| freshness | No | Recency filter: day (past 24h), week (past 7d), month (past 30d), year (past 365d). | |
| sub_domain | No | Vertical sub-domain from list_domains (e.g. finance.us_stock). Required for vertical search; omit for general web search. | |
| max_results | No | Number of results to return (max 100). | 10 |
| content_types | No | Filter results by content type. Omit to return all types. | |
| sub_domain_params | No | Additional structured parameters for the sub_domain. Fields are defined by the params_schema column returned by list_domains. | |
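Putting the two modes together, a sketch of the end-to-end flow (same assumed session; the finance example reuses the query format documented above):

```python
# Mode 1 (general web search): omit domain and sub_domain.
general = await session.call_tool(
    "search",
    {"query": "what is quantum computing", "max_results": 10},
)

# Mode 2 (vertical search): call list_domains first; never guess sub_domain.
await session.call_tool("list_domains", {"domain": "finance"})
# Suppose the returned table maps finance.us_stock to the Stock: prefix.
vertical = await session.call_tool(
    "search",
    {
        "query": "Stock:AAPL",          # formatted exactly per query_format
        "domain": "finance",
        "sub_domain": "finance.us_stock",
        "freshness": "day",             # optional recency filter
    },
)

# If a returned snippet is too thin to answer from, follow up with extract
# on the result's URL rather than re-running the search.
```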
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains return format and mode behavior but does not clarify the contradictory annotations (readOnlyHint=true vs destructiveHint=true). While it adds context beyond annotations, the lack of resolution on side effects reduces transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with headers and examples, but slightly verbose. Most sentences add value, though some guidance (such as when to call extract) repeats material from sibling tool descriptions. Could be more concise without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all aspects: two modes, prerequisites (list_domains), post-processing (extract), query decomposition, and return format. Given the complexity and no output schema, the description is thoroughly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds significant value by explaining domain/sub_domain from list_domains, query format rules, and the role of sub_domain_params. Goes well beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it executes a search and returns Markdown results. Distinguishes from siblings like batch_search, extract, and list_domains, so the agent knows exactly which tool to pick.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly describes two modes with decision rules, when to call list_domains first, when to use extract, and warns against combining multiple intents in one query. Provides concrete examples.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.