# agentfetch-mcp
## Server Configuration

Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| REDIS_URL | No | Redis URL for caching. Without Redis, fetches run uncached. | |
| JINA_API_KEY | No | Jina Reader API key. Free tier covers ~1M tokens/mo. Without it, only Trafilatura works. | |
| CACHE_TTL_SECONDS | No | Cache TTL for fetch results (default 6 hours). | 21600 |
| FIRECRAWL_API_KEY | No | FireCrawl API key. Needed for JS-heavy domains (Twitter, LinkedIn, Notion). 500 free credits on signup. | |
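As a minimal sketch of how these variables might be consumed, the helper below reads each one with the documented defaults. The function name `load_config` is hypothetical; only the variable names and the 21600-second default come from the table above.

```python
import os

def load_config(env=os.environ):
    """Hypothetical helper: read the server's environment variables,
    applying the documented defaults."""
    return {
        "redis_url": env.get("REDIS_URL"),              # None -> fetches run uncached
        "jina_api_key": env.get("JINA_API_KEY"),        # None -> only Trafilatura works
        "firecrawl_api_key": env.get("FIRECRAWL_API_KEY"),
        "cache_ttl_seconds": int(env.get("CACHE_TTL_SECONDS", "21600")),  # 6 hours
    }
```

Passing an explicit dict instead of `os.environ` keeps the helper easy to test.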
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| fetch_url | Fetch any URL and return clean, LLM-ready Markdown with token count, metadata, and 6 h caching. Args: `url`: the URL to fetch. `max_tokens`: hard cap on response size (default unlimited); pass this when you are tight on context budget, since it is cheaper than over-fetching. `format`: `"markdown"` (default, recommended), `"text"`, or `"json"`. `use_cache`: `True` returns a cached copy if one exists (≤6 h old); pass `False` only when freshness matters (live news, prices). Returns: `{"url": str, "success": bool, "markdown": str, "metadata": {title, author, published_date, domain, word_count, token_count, reading_time_seconds, content_type, language}, "cache": {hit, cached_at, expires_at}, "fetch_info": {fetcher_used, fetch_time_ms, cost_credits}, "error": str \| None}` |
| estimate_tokens | Estimate the token count of a URL's content WITHOUT fetching the body. IMPORTANT: many servers omit Content-Length on dynamic or chunked responses; when that happens, this tool returns `confident=false` and `estimated_tokens=null`. In that case, call `fetch_url` with a `max_tokens` cap instead of trusting the estimate. Args: `url`: the URL to estimate. Returns: `{"url": str, "success": bool, "estimated_tokens": int \| null, "byte_size": int \| null, "content_type": str, "confident": bool, "note": str}` |
| fetch_multiple | Fetch up to 20 URLs concurrently; each result has the same shape as a `fetch_url` result. Args: `urls`: 1–20 URLs (split larger batches into multiple calls). `max_tokens_each`: per-result cap; apply it to keep the total response inside your context budget, since total ≈ `len(urls) * max_tokens_each`. `use_cache`: `True` for cache-aware fetching (default). Returns: `{"count": int, "results": [<fetch_url shape>, ...]}` |
| search_and_fetch | Web search plus fetching the top results in one call. Args: `query`: search query (2–500 chars). `num_results`: top N results to fetch (1–10, default 3). `max_tokens_each`: per-result cap (default 2000). Returns: `{"query": str, "count": int, "results": [<fetch_url shape>, ...]}` |
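A `fetch_url` result carries more fields than most agents need. The sketch below pulls out the usual essentials; the helper name `summarize_fetch` is hypothetical, but the field names follow the Returns shape documented above.

```python
def summarize_fetch(result):
    """Hypothetical helper: reduce a fetch_url result to the fields
    an agent typically acts on. Field names follow the documented shape."""
    if not result["success"]:
        return {"ok": False, "error": result["error"]}
    meta = result["metadata"]
    return {
        "ok": True,
        "markdown": result["markdown"],
        "tokens": meta["token_count"],       # useful for context accounting
        "from_cache": result["cache"]["hit"],
    }
```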
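The `estimate_tokens` description prescribes a fallback: when `confident` is false (no usable Content-Length), cap the subsequent `fetch_url` call rather than trusting the estimate. A sketch of that decision, with the hypothetical helper name `plan_fetch` and an assumed token budget:

```python
def plan_fetch(estimate, budget_tokens=4000):
    """Hypothetical helper: turn an estimate_tokens result into
    fetch_url arguments, following the documented fallback rule."""
    if not estimate["confident"] or estimate["estimated_tokens"] is None:
        # No reliable size info: cap the fetch instead of trusting the estimate.
        return {"url": estimate["url"], "max_tokens": budget_tokens}
    if estimate["estimated_tokens"] > budget_tokens:
        # Content is known to be too large: cap it.
        return {"url": estimate["url"], "max_tokens": budget_tokens}
    # Content fits: fetch without a cap.
    return {"url": estimate["url"]}
```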
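Since a `fetch_multiple` response totals roughly `len(urls) * max_tokens_each` tokens, a per-result cap can be derived from an overall context budget. The helper name `batch_args` is an illustration; the 1–20 URL limit comes from the tool description.

```python
def batch_args(urls, context_budget_tokens, use_cache=True):
    """Hypothetical helper: build fetch_multiple arguments so the
    total response stays inside a given context budget."""
    if not 1 <= len(urls) <= 20:
        raise ValueError("fetch_multiple accepts 1-20 URLs; split larger batches")
    return {
        "urls": urls,
        # total ≈ len(urls) * max_tokens_each, so divide the budget evenly
        "max_tokens_each": context_budget_tokens // len(urls),
        "use_cache": use_cache,
    }
```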
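Validating `search_and_fetch` arguments client-side avoids a round trip on out-of-range input. The ranges below are the ones documented above; the helper name `search_args` is hypothetical.

```python
def search_args(query, num_results=3, max_tokens_each=2000):
    """Hypothetical helper: enforce the documented argument ranges
    before calling search_and_fetch."""
    if not 2 <= len(query) <= 500:
        raise ValueError("query must be 2-500 characters")
    if not 1 <= num_results <= 10:
        raise ValueError("num_results must be 1-10")
    return {"query": query, "num_results": num_results, "max_tokens_each": max_tokens_each}
```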
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| No resources | |
## MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/bch1212/agentfetch-mcp'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.