Glama

Server Details

Hosted Crawleo MCP (remote streamable HTTP endpoint).

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Crawleo/Crawleo-MCP
GitHub Stars: 10
Server Listing
crawleo-mcp

See and control every tool call

  • Log every tool call with full inputs and outputs

  • Control which tools are enabled per connector

  • Manage credentials once, use from any MCP client

  • Monitor uptime and get alerted when servers go down

Available Tools

2 tools
crawl_web

Crawl a webpage and extract its content in markdown format optimized for LLMs.

IMPORTANT - Use defaults for best results:

  • Keep default settings unless you have a specific reason to change them

  • Defaults are optimized for accuracy, speed, and cost efficiency

Credits Cost & Speed Guide:

  • HTTP request (render_js=false): 1 credit per URL (fastest, ~2-5 seconds)

  • Browser rendering (render_js=true): 10 credits per URL (~5-15 seconds)

  • Screenshot: only available with render_js=true

When to change defaults:

  • render_js: Enable for JavaScript-heavy pages (SPAs, dynamic content)

  • screenshot: Only if you need visual verification (requires render_js=true)

  • geolocation: Set country code for geo-restricted content

  • Output defaults (enhanced_html + markdown) are optimized for LLM consumption
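The credit rules above can be sketched as a small estimator. This is an illustrative helper (the function name is ours, not part of the tool), assuming the documented pricing of 1 credit per URL for plain HTTP and 10 credits per URL with browser rendering:

```python
def crawl_credits(render_js: bool = False, urls: int = 1) -> int:
    """Estimate crawl_web credits per the published pricing:
    1 credit/URL for a plain HTTP fetch, 10 credits/URL when
    browser rendering (render_js=True) is enabled."""
    per_url = 10 if render_js else 1
    return per_url * urls

print(crawl_credits())                        # → 1 (default HTTP fetch, one URL)
print(crawl_credits(render_js=True, urls=3))  # → 30 (browser rendering, three URLs)
```

This also shows why the defaults matter: enabling render_js multiplies cost tenfold, so leave it off unless the page genuinely needs JavaScript.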

Parameters

  • url (required): The URL to crawl and extract content from

  • markdown: Return markdown conversion (recommended for LLMs). Default: true.

  • raw_html: Return the raw HTML as-is. Default: false.

  • page_text: Return plain-text extraction. Default: false.

  • render_js: true = browser rendering (10 credits/URL); false = HTTP request (1 credit/URL). Default: false.

  • screenshot: Capture a screenshot (only with render_js=true). Default: false.

  • geolocation: ISO 3166-1 alpha-2 country code for geolocation, e.g. "us", "gb". Optional.

  • enhanced_html: Return cleaned/sanitized HTML. Default: true.

  • screenshot_full_page: Full-page screenshot instead of viewport only. Default: false.
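Since the server speaks MCP over streamable HTTP, a call to crawl_web travels as a JSON-RPC 2.0 "tools/call" request. Here is a minimal sketch of that payload (the example URL is a placeholder; transport details like the endpoint and any auth headers are omitted):

```python
import json

# Minimal MCP "tools/call" request for crawl_web. The JSON-RPC framing
# follows the MCP spec; argument names mirror the parameter list above.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crawl_web",
        "arguments": {
            "url": "https://example.com",
            # Defaults shown explicitly for illustration; omit in practice.
            "markdown": True,
            "render_js": False,
        },
    },
}
print(json.dumps(payload, indent=2))
```

Omitting optional arguments lets the server apply its own defaults, which the description above recommends.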
search_web

Search the web using Crawleo's AI-powered search engine. Returns search results in markdown format optimized for LLMs.

Today's date is 2026-03-25. Do NOT append the year to search queries — it degrades result quality.

IMPORTANT - Use defaults for best results:

  • Keep default settings unless you have a specific reason to change them

  • Defaults are optimized for accuracy, speed, and cost efficiency

Credits Cost & Speed Guide:

  • Base search (1 page): 1 credit (~0.5-1 seconds)

  • Each additional page (max_pages): +1 credit per page (+0.5 seconds each)

  • With auto_crawling enabled: +1 credit per result crawled (+5-10 seconds total)

When to change defaults:

  • max_pages: Increase only if you need more results (adds cost)

  • auto_crawling: Enable only if you need full page content, not just snippets (significantly slower)

  • device/geolocation: Change only for region/device-specific results

  • markdown: Keep true for LLM-optimized output
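The search pricing above can likewise be sketched as an estimator. This helper is ours, not part of the API, and it assumes you know how many results auto_crawling would fetch (one crawl credit per result, per the guide):

```python
def search_credits(max_pages: int = 1, auto_crawling: bool = False,
                   results_crawled: int = 0) -> int:
    """Estimate search_web credits per the published pricing:
    1 credit for the base page plus 1 per additional page,
    plus 1 credit per result crawled when auto_crawling is on."""
    credits = max_pages  # 1 base credit + 1 per extra page
    if auto_crawling:
        credits += results_crawled
    return credits

print(search_credits())  # → 1 (defaults: one page, no auto-crawling)
print(search_credits(max_pages=2, auto_crawling=True, results_crawled=10))  # → 12
```

The second call illustrates the warning above: auto-crawling ten results turns a 2-credit search into a 12-credit one.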

Parameters

  • cc: Country code for search (optional). Use for country-specific results.

  • query (required): The search query to look up

  • device: Device type for search. Default: "desktop". Change only for device-specific results.

  • setLang: Language code for results. Default: "en". Change only for non-English content.

  • markdown: Get markdown output. Default: true (recommended for LLMs). Keep enabled.

  • raw_html: Get raw HTML content. Default: false.

  • max_pages: Maximum result pages to fetch. Default: 1. Each page adds +1 credit. Keep at 1 unless you need extensive results.

  • page_text: Get plain text content. Default: false.

  • geolocation: Geolocation for results. Default: "us". Change only for geo-specific results.

  • auto_crawling: Auto-crawl each result for full content. Default: false. WARNING: significantly increases credits and time. Enable only when snippets are insufficient.

  • enhanced_html: Get AI-enhanced, cleaned HTML content. Default: false (for LLM optimization).
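A search_web call uses the same JSON-RPC "tools/call" framing. A minimal sketch (the query string is a placeholder; note it follows the guidance above about not appending the year):

```python
import json

# Minimal MCP "tools/call" request for search_web.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_web",
        "arguments": {
            # No year appended to the query, per the tool's guidance.
            "query": "streamable HTTP transport for MCP servers",
            "max_pages": 1,
        },
    },
}
print(json.dumps(payload, indent=2))
```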

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
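Before publishing, it can help to generate and sanity-check the file locally. A minimal sketch (the helper and the placeholder email are ours; only the schema URL and document shape come from the listing above):

```python
import json
import os
import tempfile

# Build the /.well-known/glama.json document described above.
doc = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "you@example.com"}],  # placeholder address
}

# Write it to a temporary directory, then re-read and verify the shape
# before deploying it to the server's /.well-known/ path.
path = os.path.join(tempfile.mkdtemp(), "glama.json")
with open(path, "w") as f:
    json.dump(doc, f, indent=2)

with open(path) as f:
    loaded = json.load(f)
```

Remember that the email must match your Glama account for verification to succeed.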

