
fetch

Extracts content from URLs as markdown, merges up to 6 images vertically (max size 30MB), and copies them to your clipboard for easy pasting into documents or applications.

Instructions

Retrieves URLs from the Internet and extracts their content as markdown. If images are found, they are merged vertically (max 6 images per group, max height 8000px, max size 30MB per group) and copied to the clipboard of the user's host machine. You will need to paste (Cmd+V) to insert the images.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | URL to fetch | — |
| maxLength | No | Maximum number of characters to return (positive, at most 1,000,000) | 20000 |
| startIndex | No | Character offset at which the returned content starts when truncated | 0 |
| raw | No | Return the raw content instead of extracted markdown | false |
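For example, a client might pass arguments like these in a `tools/call` request (a hypothetical call; only `url` is required, the other fields fall back to the defaults above):

```typescript
// Hypothetical arguments for a tools/call request to the fetch tool.
// Only `url` is required; the rest show the schema defaults explicitly.
const fetchArgs = {
  url: "https://example.com/article",
  maxLength: 20000, // return at most 20,000 characters
  startIndex: 0,    // start from the beginning of the content
  raw: false,       // extract readable markdown rather than raw HTML
}

console.log(JSON.stringify(fetchArgs))
```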

Implementation Reference

  • index.ts:363-375 (registration)
    Registers the 'fetch' tool in the tools/list handler, specifying its name, description, and input schema.
```typescript
server.setRequestHandler(
  ListToolsSchema,
  async (request: { method: "tools/list" }, extra: RequestHandlerExtra) => {
    const tools = [
      {
        name: "fetch",
        description:
          "Retrieves URLs from the Internet and extracts their content as markdown. If images are found, they are merged vertically (max 6 images per group, max height 8000px, max size 30MB per group) and copied to the clipboard of the user's host machine. You will need to paste (Cmd+V) to insert the images.",
        inputSchema: zodToJsonSchema(FetchArgsSchema),
      },
    ]
    return { tools }
  },
)
```
  • Defines the input schema for the 'fetch' tool using Zod, including url, maxLength, startIndex, and raw options.
```typescript
const FetchArgsSchema = z.object({
  url: z.string().url(),
  maxLength: z.number().positive().max(1000000).default(20000),
  startIndex: z.number().min(0).default(0),
  raw: z.boolean().default(false),
})
```
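Stripped of zod, the schema amounts to roughly the following checks (a plain-TypeScript sketch of the same rules, not code from the server):

```typescript
interface FetchArgs {
  url: string
  maxLength: number
  startIndex: number
  raw: boolean
}

// Sketch of the validation FetchArgsSchema performs: apply defaults,
// then enforce the declared constraints.
function parseFetchArgs(input: Record<string, unknown>): FetchArgs {
  const url = input.url
  if (typeof url !== "string") throw new Error("url must be a string")
  new URL(url) // throws on a malformed URL, like z.string().url()

  const maxLength =
    typeof input.maxLength === "number" ? input.maxLength : 20000
  if (maxLength <= 0 || maxLength > 1000000) {
    throw new Error("maxLength out of range")
  }

  const startIndex =
    typeof input.startIndex === "number" ? input.startIndex : 0
  if (startIndex < 0) throw new Error("startIndex must be >= 0")

  const raw = typeof input.raw === "boolean" ? input.raw : false

  return { url, maxLength, startIndex, raw }
}
```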
  • The main handler for tools/call requests. For the 'fetch' tool, it validates arguments, fetches and processes the URL content using fetchUrl, handles truncation, adds image info, and returns the result as text content.
```typescript
server.setRequestHandler(
  CallToolSchema,
  async (
    request: {
      method: "tools/call"
      params: { name: string; arguments?: Record<string, unknown> }
    },
    extra: RequestHandlerExtra,
  ) => {
    try {
      const { name, arguments: args } = request.params
      if (name !== "fetch") {
        throw new Error(`Unknown tool: ${name}`)
      }
      const parsed = FetchArgsSchema.safeParse(args)
      if (!parsed.success) {
        throw new Error(`Invalid arguments: ${parsed.error}`)
      }
      const { content, prefix, imageUrls } = await fetchUrl(
        parsed.data.url,
        DEFAULT_USER_AGENT_AUTONOMOUS,
        parsed.data.raw,
      )
      let finalContent = content
      if (finalContent.length > parsed.data.maxLength) {
        finalContent = finalContent.slice(
          parsed.data.startIndex,
          parsed.data.startIndex + parsed.data.maxLength,
        )
        finalContent += `\n\n<e>Content truncated. Call the fetch tool with a start_index of ${
          parsed.data.startIndex + parsed.data.maxLength
        } to get more content.</e>`
      }
      let imagesSection = ""
      if (imageUrls && imageUrls.length > 0) {
        imagesSection =
          "\n\nImages found in article:\n" +
          imageUrls.map((url) => `- ${url}`).join("\n")
      }
      return {
        content: [
          {
            type: "text",
            text: `${prefix}Contents of ${parsed.data.url}:\n${finalContent}${imagesSection}`,
          },
        ],
      }
    } catch (error) {
      return {
        content: [
          {
            type: "text",
            text: `Error: ${error instanceof Error ? error.message : String(error)}`,
          },
        ],
        isError: true,
      }
    }
  },
)
```
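The truncation step can be isolated as a small helper (a sketch of the handler's pagination logic, not code from the server):

```typescript
// Sketch of the handler's pagination: when content exceeds maxLength,
// return a slice starting at startIndex and report which start_index
// the caller should pass next to continue reading.
function paginate(
  content: string,
  startIndex: number,
  maxLength: number,
): { text: string; nextIndex: number | null } {
  if (content.length <= maxLength) {
    return { text: content, nextIndex: null }
  }
  const slice = content.slice(startIndex, startIndex + maxLength)
  return { text: slice, nextIndex: startIndex + maxLength }
}
```

Mirroring the handler, the check compares the total content length against maxLength (not the remainder after startIndex), and the caller pages forward by feeding the suggested start_index back in.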
  • Core implementation function that fetches the URL, extracts readable content if HTML, fetches and processes images, copies images to clipboard if possible, and returns formatted content with prefix.
```typescript
async function fetchUrl(
  url: string,
  userAgent: string,
  forceRaw = false,
): Promise<FetchResult> {
  const response = await fetch(url, {
    headers: { "User-Agent": userAgent },
  })

  if (!response.ok) {
    throw new Error(`Failed to fetch ${url} - status code ${response.status}`)
  }

  const contentType = response.headers.get("content-type") || ""
  const text = await response.text()
  const isHtml =
    text.toLowerCase().includes("<html") || contentType.includes("text/html")

  if (isHtml && !forceRaw) {
    const result = extractContentFromHtml(text, url)
    if (typeof result === "string") {
      return {
        content: result,
        prefix: "",
      }
    }

    const { markdown, images } = result
    const fetchedImages = await fetchImages(images)
    const imageUrls = fetchedImages.map((img) => img.src)

    if (fetchedImages.length > 0) {
      try {
        await addImagesToClipboard(fetchedImages)
        return {
          content: markdown,
          prefix: `Found and processed ${fetchedImages.length} images. Images have been merged vertically (max 6 images per group) and copied to your clipboard. Please paste (Cmd+V) to combine with the retrieved content.\n`,
          imageUrls,
        }
      } catch (err) {
        return {
          content: markdown,
          prefix: `Found ${fetchedImages.length} images but failed to copy them to the clipboard.\nError: ${err instanceof Error ? err.message : String(err)}\n`,
          imageUrls,
        }
      }
    }

    return {
      content: markdown,
      prefix: "",
      imageUrls,
    }
  }

  return {
    content: text,
    prefix: `Content type ${contentType} cannot be simplified to markdown, but here is the raw content:\n`,
  }
}
```
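The HTML detection inside fetchUrl is a simple two-part heuristic, shown here in isolation (same logic, extracted for clarity):

```typescript
// The content-sniffing heuristic fetchUrl uses: treat the response as HTML
// if either the body contains an "<html" tag or the Content-Type header
// declares text/html.
function looksLikeHtml(body: string, contentType: string): boolean {
  return (
    body.toLowerCase().includes("<html") || contentType.includes("text/html")
  )
}
```

Checking both the body and the header means mislabeled HTML pages are still simplified, while JSON or plain-text responses fall through to the raw-content branch.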
  • Helper function to fetch image data from URLs, handles animated GIFs by extracting first frame, returns images with buffer data.
```typescript
async function fetchImages(
  images: Image[],
): Promise<(Image & { data: Buffer })[]> {
  const fetchedImages = []
  for (const img of images) {
    const response = await fetch(img.src)
    if (!response.ok) {
      throw new Error(
        `Failed to fetch image ${img.src}: status ${response.status}`,
      )
    }
    const buffer = await response.arrayBuffer()
    const imageBuffer = Buffer.from(buffer)

    // Check if the image is a GIF and extract first frame if animated
    if (img.src.toLowerCase().endsWith(".gif")) {
      try {
        const metadata = await sharp(imageBuffer).metadata()
        if (metadata.pages && metadata.pages > 1) {
          // Extract first frame of animated GIF
          const firstFrame = await sharp(imageBuffer, { page: 0 })
            .png()
            .toBuffer()
          fetchedImages.push({
            ...img,
            data: firstFrame,
          })
          continue
        }
      } catch (error) {
        console.warn(`Warning: Failed to process GIF image ${img.src}:`, error)
      }
    }

    fetchedImages.push({
      ...img,
      data: imageBuffer,
    })
  }
  return fetchedImages
}
```
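addImagesToClipboard is not shown on this page, but the "max 6 images per group" rule implies a chunking step along these lines before each group is merged vertically (a hypothetical sketch, not the actual implementation):

```typescript
// Hypothetical helper: split fetched images into groups of at most 6,
// the per-group limit stated in the tool description, so each group can
// be merged vertically and copied to the clipboard in turn.
function chunkImages<T>(images: T[], groupSize = 6): T[][] {
  const groups: T[][] = []
  for (let i = 0; i < images.length; i += groupSize) {
    groups.push(images.slice(i, i + groupSize))
  }
  return groups
}
```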


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/JeremyNixon/mcp-fetch'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.