google-surf-mcp
✨ Anti-Bot Search MCP: No API Key ✨
English | 한국어

Demo only. Actual searches run headless by default (no visible browser). Set `SURF_HEADLESS=false` to make Chrome visible like in the clip above.
Google search MCP. No API key. Just works.
✅ Actually works (tested 6 free Google search MCPs, all failed)
✅ Search + URL extract in one MCP (replaces the usual search MCP + fetch MCP combo)
✅ 4 tools: `search` / `search_parallel` / `extract` / `search_extract`
✅ No API key, no proxies, no solver
✅ Auto CAPTCHA recovery (Chrome opens, human solves once, call retries)
✅ SSRF guard on `extract` (blocks localhost, private IPs, AWS metadata by default)
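The SSRF guard amounts to a hostname filter applied before fetching. A minimal sketch of the idea — `isBlockedHost` is a hypothetical name, and the real implementation may check more ranges:

```typescript
import { isIP } from "node:net";

// Hypothetical sketch of an SSRF guard like the one extract applies:
// reject localhost, private/link-local IPv4 ranges, and the AWS metadata IP.
export function isBlockedHost(hostname: string): boolean {
  const host = hostname.toLowerCase();
  if (host === "localhost" || host === "::1") return true;
  if (isIP(host) === 4) {
    const [a, b] = host.split(".").map(Number);
    if (a === 127 || a === 10) return true;           // loopback, RFC 1918
    if (a === 172 && b >= 16 && b <= 31) return true; // RFC 1918
    if (a === 192 && b === 168) return true;          // RFC 1918
    if (a === 169 && b === 254) return true;          // link-local, incl. 169.254.169.254 (AWS metadata)
  }
  return false;
}
```

Public hostnames pass through; a production guard would also resolve DNS first so a public name can't point at a private address.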
What
Plug it into any MCP client and you get Google search as a tool.
No CAPTCHA solver. When CAPTCHA fires on any tool, a Chrome window opens for a human to solve. Each solve preserves the profile's reputation with Google. Built for sustainable, ethical use.
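The recovery flow boils down to catch, let a human solve, retry once. A generic sketch with hypothetical names (`isCaptcha` and `waitForHumanSolve` are stand-ins, not the server's actual API):

```typescript
// Hypothetical sketch of auto CAPTCHA recovery: if a call fails with a
// CAPTCHA, a visible Chrome window lets a human solve it, then retry once.
async function withCaptchaRecovery<T>(
  run: () => Promise<T>,
  opts: {
    isCaptcha: (err: unknown) => boolean;
    waitForHumanSolve: () => Promise<void>; // opens visible Chrome; resolves after solve
  }
): Promise<T> {
  try {
    return await run();
  } catch (err) {
    if (!opts.isCaptcha(err)) throw err; // unrelated failure: propagate
    await opts.waitForHumanSolve();      // human solves once in the visible window
    return run();                        // single retry on the warmed profile
  }
}
```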
One-time install needs a ~1s profile warm-up (see Install).
Designed for local use. Not suitable for stateless / serverless deployment.
Numbers
| | result |
|---|---|
| sequential | ~1.5s/query (first call ~4s, includes setup) |
| parallel x4 | ~1.5s wall (first call ~9s, includes pool warm) |
| parallel x10 | ~4.5s wall |
| search_extract x5 | ~5s wall (search + 5 parallel extracts) |
Measured on a workstation with a 1Gb/s connection.
Stack
Playwright + persistent Chrome profile
playwright-extra + stealth
Resource-blocked images / media / fonts for speed
One-shot profile bootstrap before first run
Mozilla Readability + Turndown for article extraction
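The resource blocking above reduces to a filter on Playwright request types. A sketch with the predicate factored out so it runs standalone — the wiring comment is illustrative, not the server's exact code:

```typescript
// Resource types worth aborting for speed; the page's HTML/JS still loads.
const BLOCKED_TYPES = new Set(["image", "media", "font"]);

export function shouldBlock(resourceType: string): boolean {
  return BLOCKED_TYPES.has(resourceType);
}

// Illustrative wiring into a Playwright context (not executed here):
// await context.route("**/*", (route) =>
//   shouldBlock(route.request().resourceType()) ? route.abort() : route.continue()
// );
```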
Install
Requires Node 18+ and Google Chrome (or Chromium) on the system.
```
npx google-surf-mcp   # actual MCP - register in client config
```

Or local clone:

```
git clone https://github.com/HarimxChoi/google-surf-mcp
cd google-surf-mcp
npm install
npm run bootstrap
```

`bootstrap` opens a Chrome window. Run one Google search in it. Close. Profile is now warm.
Override paths if needed:
```
CHROME_PATH=/path/to/chrome SURF_TZ=America/New_York npm run bootstrap
```

Use with Claude Code
Paste this into your `~/.claude.json`:

```json
{
  "mcpServers": {
    "google-surf": {
      "command": "npx",
      "args": ["-y", "google-surf-mcp"]
    }
  }
}
```

Restart Claude Code. Done. `search`, `search_parallel`, `extract`, `search_extract` are now available.
For other MCP clients, use the same JSON shape in their config file.
Local clone variant:
```json
{
  "mcpServers": {
    "google-surf": {
      "command": "node",
      "args": ["/abs/path/to/google-surf-mcp/build/index.js"]
    }
  }
}
```

Tools
`search(query, limit?)` - single query, ~1.5s. Returns title / url / snippet. Sponsored ads filtered out.
`search_parallel(queries[], limit?)` - pool of 4, max 10 queries per call.
`extract(url, max_chars?)` - fetch a URL, return article markdown (Readability with text fallback). Failures return `{ error }`, never throw.
`search_extract(query, limit?, max_chars?)` - search + parallel extract in one call. Returns SERP results enriched with full article content. Per-page failures are isolated.
search_extract is the killer one: SERP + full article content in a single call. Replaces the usual "search MCP + URL fetcher MCP" combo most agents stitch together.
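The 4-worker fan-out behind `search_parallel` can be sketched as a small promise pool — a generic sketch, not the server's code; `runOne` stands in for a single Google search:

```typescript
// Run items through runOne with at most `size` in flight at once.
// Results keep the input order regardless of completion order.
async function pool<T, R>(
  items: T[],
  size: number,
  runOne: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0; // shared cursor; safe because JS is single-threaded
  const worker = async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await runOne(items[i]);
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(size, items.length) }, worker)
  );
  return results;
}
```

With a pool of 4, ten queries finish in roughly three batches of wall time, which matches the ~4.5s figure in Numbers.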
Env vars
| var | default | notes |
|---|---|---|
| `CHROME_PATH` | auto-detected | absolute path to Chrome binary |
| | | where the warm profile lives |
| | | browser locale |
| `SURF_TZ` | system tz | e.g. `America/New_York` |
| | | set |
| | | idle ms before closing the sequential ctx and pool |
| | | set |
Troubleshooting
CAPTCHA: a visible Chrome window opens automatically (works for all 4 tools). Solve it once, do one search inside, the call retries and continues. To fail-fast instead, run with no display attached.
"Chrome not found": install Chrome or set
CHROME_PATH.Stale selectors: Google rotates classes. PRs welcome.
Changelog
See CHANGELOG.md.
License
MIT