
mcp-server-peecai

by thein-art

Get URL Content

get_url_content
Read-only · Idempotent

Retrieve scraped markdown content and metadata from tracked URLs to analyze brand mentions in AI-generated answers across ChatGPT, Perplexity, and other models.

Instructions

Get the scraped markdown content of a source URL. Use the URLs report (get_urls_report) to discover URLs. Returns markdown content plus metadata (title, domain, channel_title, classification, url_classification, content_length, truncated, content_updated_at). If stored content exceeds max_length, the response is truncated and truncated=true — re-request with a larger max_length to get more. Returns 404 if the URL is not tracked by the project.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | URL to fetch content for. Discover URLs via get_urls_report. | |
| project_id | No | Project ID (uses PEECAI_PROJECT_ID env if omitted). Call list_projects to find IDs. | |
| max_length | No | Maximum number of characters of content to return (1-20,000,000). | 100,000 |
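The truncation behavior described in the instructions lends itself to a simple retry loop: request, check `truncated`, and re-request with a larger `max_length` until the full content arrives. The sketch below illustrates that pattern in Python; `call_tool` is a stub standing in for a real MCP client call, and the stored content is made up for demonstration.

```python
# Sketch of the re-request-on-truncation pattern, assuming the documented
# get_url_content behavior. `call_tool` is a stub, not a real MCP client.

STORED = "x" * 250_000  # pretend stored markdown for a tracked URL

def call_tool(name, arguments):
    # Stub mimicking get_url_content's documented truncation behavior:
    # content is cut at max_length and truncated=true is set when so.
    max_length = arguments.get("max_length", 100_000)
    return {
        "content": STORED[:max_length],
        "truncated": len(STORED) > max_length,
    }

def fetch_full_content(url, max_length=100_000, cap=20_000_000):
    # Double max_length (up to the documented 20,000,000 cap) until
    # the response is no longer truncated.
    while True:
        result = call_tool("get_url_content", {"url": url, "max_length": max_length})
        if not result["truncated"] or max_length >= cap:
            return result["content"]
        max_length = min(max_length * 2, cap)

full = fetch_full_content("https://example.com/article")
```

The doubling strategy keeps the number of round trips logarithmic in the stored content size rather than guessing a single large value up front.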

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| _summary | Yes | Human-readable summary of the result | |
| content | Yes | | |
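Putting the output schema together with the metadata fields named in the tool instructions, a response might look like the following. This is an illustrative shape only: the field names come from the instructions (title, domain, channel_title, classification, url_classification, content_length, truncated, content_updated_at), but every value here is invented.

```python
# Illustrative get_url_content response; values are made up, field names
# follow the metadata list in the tool instructions.
example_response = {
    "_summary": "Fetched 1,234 characters of markdown from example.com",
    "content": "# Example Article\n\nScraped markdown body...",
    "title": "Example Article",
    "domain": "example.com",
    "channel_title": None,
    "classification": "article",
    "url_classification": "editorial",
    "content_length": 1234,
    "truncated": False,
    "content_updated_at": "2024-01-01T00:00:00Z",
}
```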
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains truncation behavior ('If stored content exceeds max_length, the response is truncated and truncated=true — re-request with a larger max_length to get more') and error handling ('Returns 404 if the URL is not tracked by the project'). Annotations already cover read-only, non-destructive, idempotent, and closed-world hints, so the description complements them without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage guidance, return details, and behavioral notes. Every sentence earns its place by providing essential information without redundancy. It's appropriately sized for a tool with three parameters and rich annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (fetching content with truncation and error handling), the description is complete. It covers purpose, usage, behavior, and output details (markdown content plus metadata). With annotations covering safety and idempotency, and an output schema presumably detailing the return structure, no critical gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing full parameter documentation. The description adds minimal semantics: it mentions using get_urls_report to discover URLs (relevant to the 'url' parameter) and explains the effect of max_length on truncation. However, it doesn't add significant meaning beyond what the schema already describes, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the scraped markdown content') and resource ('source URL'), distinguishing it from siblings like get_urls_report (which discovers URLs) and get_chat_content (which handles chat content). It explicitly mentions what it returns (markdown content plus metadata), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Use the URLs report (get_urls_report) to discover URLs') and flags the failure mode for untracked URLs ('Returns 404 if the URL is not tracked by the project'). It names an alternative tool (get_urls_report) for discovering URLs, offering clear context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/thein-art/mcp-server-peecai'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.