Sleywill

SnapAPI MCP Server

extract

Scrape URLs into Markdown, text, or structured data for LLM processing. Retrieves article content via Mozilla Readability, OG metadata, links, and images while blocking ads and HTML noise.

Instructions

Extract clean, structured content from a URL. Returns Markdown, plain text, article data (via Mozilla Readability), OG metadata, links, images, or custom structured fields. Optimized for feeding web content to LLMs without HTML noise.

Input Schema

- url (required): The URL to extract content from.
- type: Extraction mode (default: markdown). 'article' uses Mozilla Readability for article body extraction; 'structured' returns title, author, word count, and cleaned content; 'metadata' returns OG tags and meta fields; 'links' and 'images' return lists of URLs.
- selector: CSS selector to scope extraction to a specific element.
- waitFor: CSS selector to wait for before extracting.
- maxLength: Maximum character length of the returned content.
- cleanOutput: Remove excess whitespace and empty links (default: true).
- darkMode: Render the page with a dark color scheme.
- blockAds: Block ad networks.
- blockCookieBanners: Block cookie consent popups.
- fields: Custom field extraction map: keys are field names, values describe what to extract. Example: {"price": "product price as a number", "rating": "star rating out of 5"}.
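To make the schema concrete, here is a small sketch of argument objects for two common calls. The example.com URLs are placeholders and the surrounding MCP client wiring is omitted; only the parameter names and defaults come from the schema above.

```python
# Default-mode call: only `url` is required; omitting `type` yields Markdown.
markdown_args = {
    "url": "https://example.com/article",
    "selector": "main",   # optional: scope extraction to the <main> element
    "maxLength": 8000,    # optional: cap the returned content length
    "blockAds": True,     # optional: block ad networks
}

# Structured-field call: keys of `fields` name the outputs, values describe
# in plain language what to extract (per the `fields` parameter above).
structured_args = {
    "url": "https://example.com/product",
    "type": "structured",
    "fields": {
        "price": "product price as a number",
        "rating": "star rating out of 5",
    },
}
```

These dicts would be passed as the tool's input when invoking `extract` through an MCP client.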
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable implementation context by noting the use of Mozilla Readability and the removal of HTML noise, but it discloses nothing about error handling, rate limiting, authentication requirements, or timeout behavior, all of which are expected from a web extraction service.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. It front-loads the core action ('Extract clean, structured content') and immediately follows with outputs and optimization purpose. Every word contributes to understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given ten parameters and no output schema, the description adequately covers the return value types (Markdown, article data, metadata, etc.) and the custom structured-fields capability. It appropriately delegates parameter details to the comprehensive schema while providing high-level use-case context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 100% description coverage, the description adds crucial semantic context by framing the tool around feeding content to LLMs, which helps agents understand the intent behind parameters like cleanOutput, blockAds, maxLength, and the custom fields object. It explains the 'why' behind the extraction modes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool extracts 'clean, structured content from a URL' and specifically lists output formats (Markdown, article data via Mozilla Readability, OG metadata, etc.). The phrase 'without HTML noise' effectively distinguishes it from the sibling 'scrape' tool, while 'feeding web content to LLMs' differentiates it from 'screenshot'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context by stating that the tool is 'Optimized for feeding web content to LLMs', which signals when to reach for it. However, it does not explicitly state when NOT to use it, nor does it name specific sibling alternatives (e.g., 'use screenshot for visual captures').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

