
AnySearch

Server Details

Unified real-time search engine skill for AI agents.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 4 of 4 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a distinct purpose: search for general or vertical searches, batch_search for parallel independent queries, extract for fetching URL content, and list_domains for query format instructions. No overlap in functionality.

Naming Consistency: 4/5

Names are lowercase with underscores, but there is a mix of single words (search, extract) and two-word patterns (batch_search, list_domains). The verb_noun pattern is present in list_domains but not consistently applied.

Tool Count: 5/5

With 4 tools, the server is well-scoped for a search utility. Each tool earns its place, covering search, batch search, domain instructions, and page extraction without redundancy.

Completeness: 4/5

Core search and fetch operations are covered. Minor gaps exist, such as no direct tool for search filtering or news-specific searches, but the set handles the primary use cases effectively.

Available Tools

4 tools
extract: A
Annotations: Read-only, Destructive

Fetch a URL and return its full content as clean Markdown.

When to use — call extract after search whenever:

  • The search snippet is too short or truncated to answer the question

  • User asks to 'read', 'open', 'summarize', or 'get details from' a specific URL

  • You need to verify a specific claim, statistic, or fact from the original source

  • The result points to a full article, report, documentation page, or paper worth reading in full

  • The answer requires data visible only in the page body (tables, sections, code blocks not captured in snippet)

  • User provides a URL directly and asks about its content

When NOT to use

  • The search snippet already contains a complete, sufficient answer

  • You only need the URL or title (not the page body)

Constraints

  • url must start with http:// or https://

  • Only HTML pages are supported; PDF/binary files will return an error

  • Content is truncated at 50,000 characters

Parameters (JSON Schema)

  • url (required): The page URL to fetch. Must start with http:// or https://.
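The constraints above (URL scheme check, 50,000-character truncation) can be mirrored on the client side before calling the tool. A minimal sketch, assuming the documented behavior; the function names here are hypothetical helpers, not part of the server's API:

```python
# Client-side guard for the documented `extract` constraints:
# scheme validation and the 50,000-character truncation limit.

MAX_CONTENT_CHARS = 50_000  # truncation limit stated in the tool description


def validate_extract_url(url: str) -> str:
    """Raise ValueError unless the URL uses http:// or https://."""
    if not url.startswith(("http://", "https://")):
        raise ValueError("url must start with http:// or https://")
    return url


def truncate_content(markdown: str) -> str:
    """Mirror the server-side truncation so downstream code sees
    the same content length the tool would return."""
    return markdown[:MAX_CONTENT_CHARS]
```

Validating locally fails fast on malformed input instead of spending a round trip on a call the server would reject.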
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations include destructiveHint=true, which contradicts the description's clear read-only implication ('Fetch a URL and return its content'). The description does not disclose any destructive behavior, leading to a severe inconsistency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (when to use, when not, constraints). It is concise without being overly terse, though it could be slightly more streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple tool with one parameter and no output schema, the description covers purpose, usage guidelines, constraints, and limitations (truncation, supported formats). Sufficient for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% but description adds practical constraints beyond schema: URL must start with http/https, only HTML supported, truncation at 50k characters. This provides useful context for correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Fetch') and resource ('URL content as clean Markdown'), clearly distinguishing itself from sibling tools like 'search' (snippets) and 'list_domains' (domains).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides detailed when-to-use scenarios (after search if snippet insufficient, user asks to read/summarize, verify claims) and explicit when-not-to-use conditions (snippet sufficient, only need URL/title).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_domains: A
Annotations: Read-only, Destructive

Call this before search to get the sub-domain catalog and MANDATORY query format rules for a given domain.

When to call — pick the domain that matches what the user is asking about:

  • finance → stocks, ticker, ETF, forex, exchange rate, currency, commodities, oil/gold price, crypto, Bitcoin, earnings, SEC filing, IPO, bond yield, financial news

  • academic → paper, research, journal, thesis, citation, DOI, abstract, peer review, arxiv, pubmed, scholar, literature, study, dataset

  • ip → patent, prior art, invention, CPC, IPC, EPO, WIPO, assignee, inventor, trademark

  • legal → law, statute, regulation, case, ruling, judgment, court, legislation, compliance, civil code, criminal code

  • travel → flight, airline, departure, arrival, delay, airport, IATA, POI, attraction, hotel, itinerary, travel guide, visa

  • gaming → game, Steam, price, discount, esports, player stats, rank, champion, LOL, DOTA, CS2

  • security → malware, virus, CVE, vulnerability, IP reputation, threat, IOC, hash, VirusTotal, OSINT, phishing, ransomware

  • geo → address, coordinates, geocode, POI, nearby, restaurant, walkability, transit score

  • environment → weather, forecast, AQI, air quality, PM2.5, satellite, NDVI, carbon, emission, agriculture

  • energy → electricity price, power grid, oil price, gas price, energy market

  • business → job, hiring, salary, company contact, B2B, lead, recruiter, HR

  • code → library docs, API reference, npm, pip, cargo, code snippet, function, repo search

  • health → clinical trial, diagnosis, drug, symptom, medical literature, WHO stats, psychology

  • education → course, lecture, textbook, tutorial video, MOOC, open courseware

  • tech → product specs, barcode, HackerNews, ProductHunt, tech review

  • ecommerce → price comparison, Walmart, shopping, product search

  • film → movie, TV show, anime, torrent, streaming

  • music → album, artist, lyrics, music torrent

  • fashion → cosmetics ingredients, beauty, trend, streetwear release

  • home → recipe, repair guide, food safety, walkability, appliance

  • religion → bible, quran, torah, buddhist texts, manuscripts

Returns

Markdown table filtered to the specified domain: sub_domain | description | query_format | zone

CRITICAL: How to use results

  • sub_domain is the PRIMARY routing key — always pass it to search

  • query_format column is MANDATORY: the wrong format means the wrong data source and wrong results. Hard constraint examples: finance.us_stock requires a Stock:/Forex:/News:/Commodities: prefix; security.noise requires a single IPv4 address; geo.weather requires a city name or lat,lon pair

  • If multiple sub_domains match different aspects, make PARALLEL search calls — one per sub_domain

  • zone=CN → set zone="cn" in search; zone=ALL/US/EU → omit zone
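The zone rule above is a small, mechanical mapping: CN becomes an explicit zone="cn" argument, while ALL/US/EU mean the argument is omitted. A minimal sketch; the helper name is hypothetical:

```python
# Map a catalog `zone` value to the extra keyword arguments that
# should be passed to `search`, per the rule: zone=CN -> zone="cn";
# zone=ALL/US/EU -> omit the zone argument entirely.

def search_zone_kwargs(zone: str) -> dict:
    """Return the search kwargs implied by a catalog zone value."""
    if zone.strip().upper() == "CN":
        return {"zone": "cn"}
    return {}  # ALL / US / EU: omit zone
```

Returning an empty dict for the non-CN cases lets the caller splat the result into the search call without any conditional logic at the call site.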

Cache Rule — NEVER repeat list_domains for the same domain

Once you have called list_domains for a domain in this conversation, the result is valid for the ENTIRE session. Do NOT call list_domains again for the same domain — reuse the sub_domain and query format you already received. If you need info for multiple domains, pass them all in the domains array in a single call.
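The cache rule can be enforced in agent scaffolding with a session-scoped memo: look up cached domains first, batch only the misses into a single call. A minimal sketch, assuming `call_list_domains` is a hypothetical stand-in for the actual tool invocation that returns a per-domain mapping:

```python
# Session-scoped cache enforcing the rule above: never repeat
# list_domains for a domain already fetched in this conversation,
# and batch any uncached domains into one call.

class DomainCatalogCache:
    def __init__(self, call_list_domains):
        # call_list_domains(domains: list[str]) -> dict[str, str]
        # is a hypothetical wrapper around the real tool call.
        self._call = call_list_domains
        self._cache: dict[str, str] = {}

    def get(self, domains: list[str]) -> dict[str, str]:
        """Return catalog entries, calling the tool only for misses."""
        missing = [d for d in domains if d not in self._cache]
        if missing:
            # one batched call for all uncached domains
            self._cache.update(self._call(missing))
        return {d: self._cache[d] for d in domains}
```

With this in place, a second request for the same domain is served from the cache, and multi-domain requests collapse into one batched `domains` call as the description requires.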

Parameters (JSON Schema)

  • domain (optional): Filter by a single domain. Mutually exclusive with domains array.

  • domains (optional): Batch query for multiple domains in a single call. Takes priority over domain. Each item must be a valid domain value. Max 5.
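The parameter rules (mutual exclusivity, max 5 items) can be validated before building the tool-call arguments. A minimal sketch under those documented rules; the builder function is a hypothetical client-side helper, not part of the server's API:

```python
# Build the arguments object for a `list_domains` call, enforcing the
# documented rules: `domain` and `domains` are mutually exclusive, and
# `domains` accepts at most 5 entries.

def list_domains_args(domain=None, domains=None):
    if domain is not None and domains is not None:
        raise ValueError("domain and domains are mutually exclusive")
    if domains is not None:
        if len(domains) > 5:
            raise ValueError("domains accepts at most 5 entries")
        return {"domains": list(domains)}
    if domain is not None:
        return {"domain": domain}
    return {}  # no filter: request the full catalog
```

Note the schema also says `domains` takes priority over `domain`; raising on the combination is the stricter reading of the mutual-exclusivity note and avoids silently ignoring one argument.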
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description depicts a read-only operation that returns catalog data with no side effects. However, the annotation destructiveHint=true contradicts this by suggesting the tool may have destructive effects. The description does not disclose any destructive behavior, and the contradiction undermines transparency. This is a serious inconsistency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (When to call, Returns, CRITICAL, Cache Rule), uses bold and lists for readability, and is front-loaded with the most important information. Every sentence adds value, and the lengthy domain list is justified by its utility. It is efficient despite its length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description fully explains the return format (Markdown table columns: sub_domain, description, query_format, zone). It covers all essential aspects: how to call, what you get, how to interpret and use the results, caching rules, and common pitfalls. The tool is not overly complex, and the description is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds significant value by providing a detailed mapping from each domain enum value to example topics and use cases, helping the agent choose the correct value. It also explains the mutual exclusivity and priority of 'domain' vs 'domains' and the max limit of 5. This goes well beyond the schema's minimal descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Call this before search to get the sub-domain catalog and MANDATORY query format rules for a given domain.' It specifies the action (get catalog and rules), the resource (sub-domains, query formats), and distinguishes from siblings (search, batch_search, extract) by positioning it as a preparatory step. The extensive list of domains and associated topics further clarifies the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to call (before search), how to pick the domain based on user query, how to use the results (pass sub_domain to search, obey query_format, parallel calls for multiple sub_domains, handle zone), and what not to do (never repeat list_domains for same domain). It also includes hard constraint examples and caching rules, making the usage crystal clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
