
URL Fetch MCP

by aelaguiz

fetch_json

Retrieve and parse JSON data from any web URL, then return it in a readable, formatted structure. Use this tool to access and interpret JSON content directly from online sources.

Instructions

Fetch JSON from a URL, parse it, and return it formatted.

This tool allows Claude to retrieve and parse JSON data from any accessible web URL.
The JSON is prettified for better readability.

Input Schema

Name    | Required | Description                                 | Default
--------|----------|---------------------------------------------|--------
headers | No       | Additional headers to send with the request | None
timeout | No       | Request timeout in seconds                  | 10
url     | Yes      | The URL to fetch JSON from                  | —
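To make the schema concrete, here is a hypothetical argument payload an agent might pass to fetch_json; the URL and header values below are illustrative assumptions, not taken from the server's documentation.

```python
import json

# Hypothetical arguments for fetch_json. Only "url" is required;
# "headers" defaults to None and "timeout" to 10 seconds.
arguments = {
    "url": "https://api.github.com/repos/aelaguiz/mcp-url-fetch",
    "headers": {"Accept": "application/vnd.github+json"},
    "timeout": 5,
}

print(json.dumps(arguments, indent=2))
```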

Output Schema

Name   | Required | Description                                     | Default
-------|----------|-------------------------------------------------|--------
result | Yes      | The formatted JSON string (or an error message) | —

Implementation Reference

  • The fetch_json tool handler: decorated with @app.tool() for automatic registration and schema generation. Fetches JSON from URL using httpx, validates content-type loosely, parses with response.json(), formats with json.dumps(indent=2), handles errors, and integrates with MCP Context for logging.
    # Imports assumed from the surrounding module (not shown in the excerpt);
    # `app` is assumed to be a FastMCP instance defined elsewhere.
    import json
    from typing import Annotated, Dict, Optional

    import httpx
    from mcp.server.fastmcp import Context
    from pydantic import AnyUrl, Field

    @app.tool()
    async def fetch_json(
        url: Annotated[AnyUrl, Field(description="The URL to fetch JSON from")],
        headers: Annotated[
            Optional[Dict[str, str]], Field(description="Additional headers to send with the request")
        ] = None,
        timeout: Annotated[int, Field(description="Request timeout in seconds")] = 10,
        ctx: Optional[Context] = None,
    ) -> str:
        """Fetch JSON from a URL, parse it, and return it formatted.

        This tool allows Claude to retrieve and parse JSON data from any accessible web URL.
        The JSON is prettified for better readability.
        """
        if ctx:
            await ctx.info(f"Fetching JSON from URL: {url}")

        request_headers = {
            "User-Agent": "URL-Fetch-MCP/0.1.0",
            "Accept": "application/json",
        }

        if headers:
            request_headers.update(headers)

        async with httpx.AsyncClient(follow_redirects=True, timeout=timeout) as client:
            try:
                response = await client.get(str(url), headers=request_headers)
                response.raise_for_status()

                content_type = response.headers.get("content-type", "")

                if "json" not in content_type:
                    # Try to parse anyway, but warn
                    if ctx:
                        await ctx.warning(f"URL did not return JSON content-type (got: {content_type})")

                # Parse and format JSON
                try:
                    json_data = response.json()
                    formatted_json = json.dumps(json_data, indent=2)

                    if ctx:
                        await ctx.info(f"Successfully fetched and parsed JSON ({len(formatted_json)} bytes)")

                    return formatted_json

                except json.JSONDecodeError as e:
                    error_message = f"Failed to parse JSON from response: {str(e)}"
                    if ctx:
                        await ctx.error(error_message)
                    return error_message

            except Exception as e:
                error_message = f"Error fetching JSON from URL {url}: {str(e)}"
                if ctx:
                    await ctx.error(error_message)
                return error_message
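The handler's parse-and-format step can be sketched offline, standing in for response.json() with json.loads on a raw body (the sample body below is illustrative, not from the server).

```python
import json

# Offline sketch of the parse-and-format step: no network or httpx needed.
raw_body = '{"name": "URL Fetch MCP", "tools": ["fetch_json"]}'

try:
    data = json.loads(raw_body)
    formatted = json.dumps(data, indent=2)  # prettified, as the tool returns it
except json.JSONDecodeError as e:
    formatted = f"Failed to parse JSON from response: {e}"

print(formatted)
```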
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It covers the key behaviors: fetching from URLs, parsing JSON, and formatting output. However, it doesn't mention error handling, authentication needs, rate limits, or what happens with non-JSON responses. The description adds basic context but falls short of comprehensive disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two focused sentences that each earn their place. The first sentence states the core functionality, and the second provides additional context about accessibility and formatting. No wasted words, well-structured, and front-loaded with the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (an HTTP request with JSON parsing), the absence of annotations, full schema description coverage, and the presence of an output schema, the description is reasonably complete. It covers the main purpose and formatting behavior, though additional context about error cases or authentication would improve completeness for a tool making external requests.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions 'any accessible web URL' which reinforces the url parameter but doesn't provide additional semantic context for headers or timeout.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
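One parameter interaction the description leaves implicit is how the headers parameter combines with the tool's defaults. The implementation merges them with dict.update, so caller-supplied headers win; the caller value below is a hypothetical example.

```python
# Sketch of the handler's header-merge behavior: caller-supplied headers
# override the tool's defaults, so a custom Accept header replaces
# "application/json" while untouched defaults survive.
request_headers = {
    "User-Agent": "URL-Fetch-MCP/0.1.0",
    "Accept": "application/json",
}
caller_headers = {"Accept": "application/vnd.api+json"}  # hypothetical input
request_headers.update(caller_headers)

print(request_headers["Accept"])
```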

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('fetch JSON from a URL, parse it, and return it formatted') and distinguishes it from sibling tools (fetch_image, fetch_url) by specifying JSON data retrieval and parsing. It explicitly mentions 'prettified for better readability' which adds differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('retrieve and parse JSON data from any accessible web URL'), but doesn't explicitly state when NOT to use it or mention alternatives like fetch_url for non-JSON content. It implies usage for JSON data specifically, which is helpful but not fully comparative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
