agentfetch-mcp

bch1212 · Glama MCP directory

Server Configuration

Describes the environment variables required to run the server.

  • REDIS_URL (optional): Redis URL for caching. Without Redis, fetches run uncached.

  • JINA_API_KEY (optional): Jina Reader API key. The free tier covers ~1M tokens/month. Without it, only Trafilatura works.

  • CACHE_TTL_SECONDS (optional, default: 21600): Cache TTL for fetch results, in seconds (default 6 hours).

  • FIRECRAWL_API_KEY (optional): FireCrawl API key. Needed for JS-heavy domains (Twitter, LinkedIn, Notion). 500 free credits on signup.
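As an illustration, these variables could be supplied through an MCP client configuration file. The `command` value and the placeholder keys below are assumptions for the sketch, not taken from the server's own docs:

```json
{
  "mcpServers": {
    "agentfetch": {
      "command": "agentfetch-mcp",
      "env": {
        "REDIS_URL": "redis://localhost:6379/0",
        "JINA_API_KEY": "<your-jina-key>",
        "CACHE_TTL_SECONDS": "21600",
        "FIRECRAWL_API_KEY": "<your-firecrawl-key>"
      }
    }
  }
}
```

All four variables are optional; omitting REDIS_URL simply disables caching, and omitting both API keys leaves only the Trafilatura fetcher available.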

Capabilities

Features and capabilities supported by this server

  • tools: { "listChanged": false }

  • prompts: { "listChanged": false }

  • resources: { "subscribe": false, "listChanged": false }

  • experimental: {}

Tools

Functions exposed to the LLM to take actions

fetch_url

Fetch any URL and return clean, LLM-ready Markdown with token count, metadata, and 6h caching.

WHEN TO USE:

  • You have a specific URL whose content you need.

  • You want to cap response size to stay inside your context window.

  • You want repeat fetches to be cheap (cache hits ≈ $0.0001).

  • The URL might be JS-rendered, a PDF, or behind a paywall — this tool auto-routes to the right fetcher (Trafilatura → Jina → FireCrawl → PDF).

WHEN NOT TO USE:

  • You don't know which URL to fetch — use search_and_fetch instead.

  • You have many URLs to fetch — use fetch_multiple instead.

Args:

  • url: The URL to fetch.

  • max_tokens: Hard cap on response size. Default unlimited. Pass this if you're tight on context budget — cheaper than over-fetching.

  • format: "markdown" (default — recommended), "text", or "json".

  • use_cache: True returns a cached copy if one exists (≤6h old). Pass False only when freshness matters (live news, prices).

Returns: {
  "url": str,
  "success": bool,
  "markdown": str,
  "metadata": {title, author, published_date, domain, word_count, token_count, reading_time_seconds, content_type, language},
  "cache": {hit, cached_at, expires_at},
  "fetch_info": {fetcher_used, fetch_time_ms, cost_credits},
  "error": str | None
}
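A minimal sketch of how a client might assemble fetch_url arguments under a context budget. The helper name and the "spend at most half the remaining context" rule are illustrative assumptions, not part of the server:

```python
def build_fetch_args(url: str, remaining_context_tokens: int,
                     fresh: bool = False) -> dict:
    """Build an argument dict for the fetch_url tool.

    Caps max_tokens at half the remaining context so one fetch cannot
    crowd out the rest of the conversation. fresh=True bypasses the
    6h cache for time-sensitive pages (live news, prices).
    """
    args = {"url": url, "format": "markdown", "use_cache": not fresh}
    cap = remaining_context_tokens // 2
    if cap > 0:
        args["max_tokens"] = cap  # hypothetical budgeting rule
    return args

print(build_fetch_args("https://example.com/post", 8000))
```

The same dict would then be passed as the tool-call arguments by whatever MCP client is in use.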

estimate_tokens

Estimate token count of a URL's content WITHOUT fetching the body.

WHEN TO USE:

  • You're considering fetching a URL but unsure if it fits your remaining context window. This call is ~10x cheaper than a full fetch.

  • You want to triage a list of candidate URLs before deciding which to actually retrieve.

IMPORTANT: Many servers omit Content-Length on dynamic / chunked responses. When that happens, this tool returns confident=false and estimated_tokens=null. In that case, call fetch_url with a max_tokens cap instead of trusting the estimate.

Args: url: The URL to estimate.

Returns: { "url": str, "success": bool, "estimated_tokens": int | null, "byte_size": int | null, "content_type": str, "confident": bool, "note": str }
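The fallback the docs describe (trust a confident estimate, otherwise fetch with a hard cap) could look like this on the client side. plan_fetch is a made-up helper name for the sketch:

```python
def plan_fetch(estimate: dict, remaining_tokens: int) -> dict:
    """Turn an estimate_tokens result into fetch_url arguments.

    If the estimate is confident and the page fits the remaining
    context, fetch uncapped; otherwise fall back to a hard
    max_tokens cap, as the docs advise when confident=false.
    """
    url = estimate["url"]
    tokens = estimate.get("estimated_tokens")
    if estimate.get("confident") and tokens is not None and tokens <= remaining_tokens:
        return {"url": url}  # fits: no cap needed
    # Unknown size (no Content-Length) or too large: cap instead of trusting it.
    return {"url": url, "max_tokens": remaining_tokens}
```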

fetch_multiple

Fetch up to 20 URLs concurrently. Each result is the same shape as fetch_url.

WHEN TO USE:

  • You have a list of URLs (search results, links from a doc, sitemap) and want them retrieved in parallel rather than one at a time.

Args:

  • urls: 1–20 URLs. Larger batches: split into multiple calls.

  • max_tokens_each: Per-result cap. Apply this to keep total response inside your context budget — total ≈ len(urls) * max_tokens_each.

  • use_cache: True for cache-aware fetching (default).

Returns: {"count": int, "results": [<fetch_url shape>, ...]}
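Splitting a larger URL list into ≤20-URL batches and deriving max_tokens_each from a total budget (total ≈ len(urls) * max_tokens_each, per the args above) might look like this; batch_caps is an illustrative helper, not a server API:

```python
def batch_caps(urls: list[str], total_budget: int, batch_size: int = 20) -> list[dict]:
    """Split urls into batches of at most batch_size (20 is the
    documented fetch_multiple limit) and set a per-result
    max_tokens_each so the combined output stays within total_budget.
    """
    if not urls:
        return []
    per_result = max(1, total_budget // len(urls))
    batches = [urls[i:i + batch_size] for i in range(0, len(urls), batch_size)]
    return [{"urls": batch, "max_tokens_each": per_result} for batch in batches]
```

Each dict in the returned list is one fetch_multiple call's arguments.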

search_and_fetch

Web search + fetch top results in one call.

WHEN TO USE:

  • You have a research question, not specific URLs. E.g. "what's the latest on X", "find docs for Y library", "recent news about Z".

  • You'd otherwise have to call a search tool, parse results, then call fetch — this collapses that into one round-trip.

Args:

  • query: Search query (2–500 chars).

  • num_results: Top N to fetch (1–10, default 3).

  • max_tokens_each: Per-result cap (default 2000).

Returns: {"query": str, "count": int, "results": [<fetch_url shape>, ...]}
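A client-side guard that validates and clamps arguments to the documented bounds (query 2–500 chars, num_results 1–10, max_tokens_each default 2000) could be sketched as follows; search_args is a hypothetical helper:

```python
def search_args(query: str, num_results: int = 3,
                max_tokens_each: int = 2000) -> dict:
    """Build search_and_fetch arguments, enforcing the documented
    bounds before the call ever reaches the server."""
    if not (2 <= len(query) <= 500):
        raise ValueError("query must be 2-500 characters")
    return {
        "query": query,
        "num_results": max(1, min(10, num_results)),  # clamp to 1-10
        "max_tokens_each": max_tokens_each,
    }
```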

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources
