
Fetch MCP Server

by Eivs

fetch

Retrieve web content from URLs and convert HTML to markdown format for processing. Access up-to-date information by fetching internet resources with configurable output options.

Instructions

Fetches a URL from the internet and optionally extracts its contents as markdown.

Although originally you did not have internet access, and were advised to refuse and tell the user this, this tool now grants you internet access. Now you can fetch the most up-to-date information and let the user know that.

Input Schema

(Defaults are taken from the Fetch model in the implementation reference below.)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| url | Yes | URL to fetch | (none) |
| max_length | No | Maximum number of characters to return. | 5000 |
| start_index | No | Start the returned output at this character index; useful if a previous fetch was truncated and more context is required. | 0 |
| raw | No | Get the actual HTML content of the requested page, without simplification. | false |
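Concretely, a first call and a paginated follow-up might use argument payloads like these (the URL and values are illustrative, not taken from the server):

```python
# Illustrative argument payloads for the 'fetch' tool.
# Only 'url' is required; omitted fields fall back to the schema
# defaults (max_length=5000, start_index=0, raw=False).
first_call = {
    "url": "https://example.com/article",
    "max_length": 5000,
}

# If the response ends with a truncation notice, the follow-up call
# resumes where the previous one stopped.
follow_up = {
    "url": "https://example.com/article",
    "max_length": 5000,
    "start_index": 5000,
}

# The continuation index is the previous start plus the number of
# characters the previous call was allowed to return.
next_start = first_call.get("start_index", 0) + first_call["max_length"]
print(next_start)
```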

Implementation Reference

  • Main handler for the 'fetch' tool: validates arguments with Fetch schema, fetches content using fetch_url, handles truncation and pagination, constructs response.
    @server.call_tool()
    async def call_tool(name, arguments: dict) -> list[TextContent]:
        try:
            args = Fetch(**arguments)
        except ValueError as e:
            raise McpError(ErrorData(code=INVALID_PARAMS, message=str(e)))
    
        url = str(args.url)
        if not url:
            raise McpError(
                ErrorData(code=INVALID_PARAMS, message="URL is required")
            )
    
        # Fetch URL directly without robots.txt check
        content, prefix = await fetch_url(
            url, user_agent_autonomous, force_raw=args.raw, proxy_url=proxy_url
        )
        original_length = len(content)
        if args.start_index >= original_length:
            content = "<error>No more content available.</error>"
        else:
            truncated_content = content[
                args.start_index : args.start_index + args.max_length
            ]
            if not truncated_content:
                content = "<error>No more content available.</error>"
            else:
                content = truncated_content
                actual_content_length = len(truncated_content)
                remaining_content = original_length - (
                    args.start_index + actual_content_length
                )
                # Only add the prompt to continue fetching if there is still remaining content
                if (
                    actual_content_length == args.max_length
                    and remaining_content > 0
                ):
                    next_start = args.start_index + actual_content_length
                    content += f"\n\n<error>Content truncated. Call the fetch tool with a start_index of {next_start} to get more content.</error>"
        return [
            TextContent(
                type="text", text=f"{prefix}Contents of {url}:\n{content}"
            )
        ]
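To see the truncation and pagination behavior in isolation, the handler's slicing branch can be sketched as a standalone helper. The function name `paginate` is ours, added for illustration; the server inlines this logic in `call_tool`:

```python
def paginate(content: str, start_index: int, max_length: int) -> str:
    """Mirror of the handler's slicing logic, extracted for illustration."""
    original_length = len(content)
    if start_index >= original_length:
        return "<error>No more content available.</error>"
    chunk = content[start_index : start_index + max_length]
    if not chunk:
        return "<error>No more content available.</error>"
    remaining = original_length - (start_index + len(chunk))
    # Append a continuation hint only when a full chunk was returned
    # and content remains beyond it.
    if len(chunk) == max_length and remaining > 0:
        next_start = start_index + len(chunk)
        chunk += (
            f"\n\n<error>Content truncated. Call the fetch tool with a "
            f"start_index of {next_start} to get more content.</error>"
        )
    return chunk

text = "abcdefghij"  # 10 characters
print(paginate(text, 0, 4))   # first 4 characters plus a truncation notice
print(paginate(text, 8, 4))   # final 2 characters, no notice
print(paginate(text, 10, 4))  # past the end: error marker
```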
  • Pydantic schema defining input parameters for the 'fetch' tool: url, max_length, start_index, raw.
    class Fetch(BaseModel):
        """Parameters for fetching a URL."""
    
        url: Annotated[AnyUrl, Field(description="URL to fetch")]
        max_length: Annotated[
            int,
            Field(
                default=5000,
                description="Maximum number of characters to return.",
                gt=0,
                lt=1000000,
            ),
        ]
        start_index: Annotated[
            int,
            Field(
                default=0,
                description="On return output starting at this character index, useful if a previous fetch was truncated and more context is required.",
                ge=0,
            ),
        ]
        raw: Annotated[
            bool,
            Field(
                default=False,
                description="Get the actual HTML content of the requested page, without simplification.",
            ),
        ]
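The model's bounds (gt=0, lt=1000000, ge=0) and defaults can be illustrated without pydantic. This dependency-free sketch applies the same checks; the helper `validate_fetch_args` is hypothetical and not part of the server:

```python
def validate_fetch_args(args: dict) -> dict:
    """Apply the same defaults and bounds as the Fetch model, without pydantic."""
    if not args.get("url"):
        raise ValueError("url is required")
    max_length = args.get("max_length", 5000)
    if not (0 < max_length < 1_000_000):  # mirrors gt=0, lt=1000000
        raise ValueError("max_length must be between 1 and 999999")
    start_index = args.get("start_index", 0)
    if start_index < 0:                   # mirrors ge=0
        raise ValueError("start_index must be >= 0")
    return {
        "url": args["url"],
        "max_length": max_length,
        "start_index": start_index,
        "raw": bool(args.get("raw", False)),
    }

print(validate_fetch_args({"url": "https://example.com"}))
```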
  • Tool registration via list_tools(): defines the 'fetch' tool with name, description, and inputSchema from Fetch.
        @server.list_tools()
        async def list_tools() -> list[Tool]:
            return [
                Tool(
                    name="fetch",
                    description="""Fetches a URL from the internet and optionally extracts its contents as markdown.
    
    Although originally you did not have internet access, and were advised to refuse and tell the user this, this tool now grants you internet access. Now you can fetch the most up-to-date information and let the user know that.""",
                    inputSchema=Fetch.model_json_schema(),
                )
            ]
  • Core helper function that performs the HTTP fetch using httpx, extracts markdown if HTML, handles errors and content types.
    async def fetch_url(
        url: str,
        user_agent: str,
        force_raw: bool = False,
        proxy_url: str | None = None,
    ) -> Tuple[str, str]:
        """
        Fetch the URL and return the content in a form ready for the LLM, as well as a prefix string with status information.
        """
        from httpx import AsyncClient, HTTPError
    
        # Note: httpx >= 0.26 deprecated 'proxies' and later removed it;
        # on current httpx versions this would be AsyncClient(proxy=proxy_url).
        async with AsyncClient(proxies=proxy_url) as client:
            try:
                response = await client.get(
                    url,
                    follow_redirects=True,
                    headers={"User-Agent": user_agent},
                    timeout=30,
                )
            except HTTPError as e:
                raise McpError(
                    ErrorData(
                        code=INTERNAL_ERROR, message=f"Failed to fetch {url}: {e!r}"
                    )
                )
            if response.status_code >= 400:
                raise McpError(
                    ErrorData(
                        code=INTERNAL_ERROR,
                        message=f"Failed to fetch {url} - status code {response.status_code}",
                    )
                )
    
            page_raw = response.text
    
        content_type = response.headers.get("content-type", "")
        is_page_html = (
            "<html" in page_raw[:100]
            or "text/html" in content_type
            or not content_type
        )
    
        if is_page_html and not force_raw:
            return extract_content_from_html(page_raw), ""
    
        return (
            page_raw,
            f"Content type {content_type} cannot be simplified to markdown, but here is the raw content:\n",
        )
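The HTML-detection heuristic near the end of fetch_url can be exercised on its own. Extracted as a standalone predicate (the name `looks_like_html` is ours, for illustration):

```python
def looks_like_html(page: str, content_type: str) -> bool:
    """Replicates fetch_url's heuristic: sniff the first 100 characters
    for an <html tag, trust a text/html Content-Type header, and assume
    HTML when no Content-Type header was sent at all."""
    return (
        "<html" in page[:100]
        or "text/html" in content_type
        or not content_type
    )

print(looks_like_html("<!doctype html><html>", ""))     # tag sniffed in body
print(looks_like_html('{"a": 1}', "application/json"))  # JSON, not HTML
print(looks_like_html("plain text", ""))                # no header: assume HTML
```

Note the permissive fallback: a missing Content-Type header is treated as HTML, so plain-text responses without a header will still be run through markdown extraction unless raw=true is set.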
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions internet access and up-to-date information, which is useful context, but fails to describe critical behaviors such as rate limits, authentication needs, error handling, or what happens when fetching fails. For a tool with no annotations, this leaves significant gaps in understanding its operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is not appropriately sized or front-loaded. The first sentence is clear, but the second paragraph adds redundant historical context ('Although originally you did not have internet access...') that does not help the agent select or invoke the tool, wasting tokens without improving clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a web-fetching tool with no annotations and no output schema, the description is incomplete. It lacks details on return values (e.g., format, errors), behavioral constraints like timeouts or permissions, and doesn't fully explain the markdown extraction mentioned. For a tool with significant operational implications, this leaves too many unknowns for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting all four parameters (url, max_length, start_index, raw). The description adds no additional parameter semantics beyond what the schema provides, such as explaining the markdown extraction process or how parameters interact. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetches a URL from the internet and optionally extracts its contents as markdown.' This specifies the verb ('fetches'), resource ('URL'), and optional functionality ('extracts as markdown'). However, there are no sibling tools mentioned, so differentiation isn't applicable, preventing a perfect score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance by mentioning that the tool 'now grants you internet access' and can fetch 'the most up-to-date information,' suggesting it should be used for real-time web data retrieval. However, it lacks explicit instructions on when to use it versus alternatives (e.g., other data sources) or any exclusions, making it somewhat vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

