
check_robots_txt

Analyze a website's robots.txt file to determine crawl permissions and ensure compliance with ethical web scraping practices. Provides insights into allowed and disallowed paths for crawling.

Instructions

Check the robots.txt file for a domain to understand crawling permissions.

This tool helps ensure ethical scraping by checking the robots.txt file of a website to see what crawling rules are in place.

Input Schema

Name    Required    Description    Default
url     Yes         -              -

Output Schema

No fields are documented in the schema table; the fields returned by this tool are defined by the RobotsResponse model shown under Implementation Reference.
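
As an illustration only (the values below are hypothetical), a call supplies the single required argument and receives a payload shaped like that RobotsResponse model:

    # Hypothetical example: the argument dict and the shape of a successful
    # response; field names follow the RobotsResponse model shown below.
    example_arguments = {"url": "https://example.com"}

    example_response = {
        "success": True,
        "url": "https://example.com",
        "robots_txt_url": "https://example.com/robots.txt",
        "robots_content": "User-agent: *\nDisallow: /private/\n",
        "is_allowed": True,
        "user_agent": "*",
        "error": None,
    }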

Implementation Reference

  • The handler function that implements the check_robots_txt tool logic. It fetches and parses the site's robots.txt file to determine crawling permissions (a stricter permission check is sketched after this list).
    @app.tool()
    async def check_robots_txt(
        url: Annotated[
            str,
            Field(
                ...,
                description="""网站域名 URL,必须包含协议前缀(http://或https://),将检查该域名的 robots.txt文件。
                    示例:"https://example.com"将检查"https://example.com/robots.txt"。用于确保道德抓取,遵循网站的爬虫规则""",
            ),
        ],
    ) -> RobotsResponse:
        """
        Check the robots.txt file for a domain to understand crawling permissions.
    
        This tool helps ensure ethical scraping by checking the robots.txt file
        of a website to see what crawling rules are in place.
    
        Returns:
            RobotsResponse object containing success status, robots.txt content, base domain, and content availability.
            Helps determine crawling permissions and restrictions for the specified domain.
        """
        try:
            # Validate inputs
            parsed = urlparse(url)
            if not parsed.scheme or not parsed.netloc:
                raise ValueError("Invalid URL format")
    
            logger.info(f"Checking robots.txt for: {url}")
    
            # Parse URL to get base domain
            robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"
    
            # Scrape robots.txt
            result = await web_scraper.simple_scraper.scrape(robots_url, extract_config={})
    
            if "error" in result:
                return RobotsResponse(
                    success=False,
                    url=url,
                    robots_txt_url=robots_url,
                    is_allowed=False,
                    user_agent="*",
                    error=f"Could not fetch robots.txt: {result['error']}",
                )
    
            robots_content = result.get("content", {}).get("text", "")
    
            return RobotsResponse(
                success=True,
                url=url,
                robots_txt_url=robots_url,
                robots_content=robots_content,
                is_allowed=True,  # Basic check, could be enhanced
                user_agent="*",
            )
    
        except Exception as e:
            logger.error(f"Error checking robots.txt for {url}: {str(e)}")
            return RobotsResponse(
                success=False,
                url=url,
                robots_txt_url="",
                is_allowed=False,
                user_agent="*",
                error=str(e),
            )
  • Pydantic model defining the output schema for the check_robots_txt tool, including fields for success status, robots.txt content, allowance, and errors.
    class RobotsResponse(BaseModel):
        """Response model for robots.txt check."""
    
        success: bool = Field(..., description="Whether the operation succeeded")
        url: str = Field(..., description="The URL that was checked")
        robots_txt_url: str = Field(..., description="URL of the robots.txt file")
        robots_content: Optional[str] = Field(default=None, description="Contents of the robots.txt file")
        is_allowed: bool = Field(..., description="Whether crawling is allowed")
        user_agent: str = Field(..., description="User-Agent used for the check")
        error: Optional[str] = Field(default=None, description="Error message, if any")
  • The @app.tool() decorator registers the check_robots_txt function as an MCP tool in the FastMCP application.
    @app.tool()
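
The handler above returns is_allowed=True whenever robots.txt can be fetched; the inline comment notes this is a basic check that could be enhanced. A minimal sketch of a stricter check, assuming nothing beyond the Python standard library's urllib.robotparser (the function name is_path_allowed and the sample rules are hypothetical, not part of this server):

    from urllib.robotparser import RobotFileParser

    # Sample robots.txt rules; in the tool, this text would come from
    # result["content"]["text"] after scraping <domain>/robots.txt.
    SAMPLE_ROBOTS_TXT = "User-agent: *\nDisallow: /private/\nAllow: /\n"

    def is_path_allowed(robots_txt: str, url: str, user_agent: str = "*") -> bool:
        """Return True if the given URL may be fetched under the parsed rules."""
        parser = RobotFileParser()
        parser.parse(robots_txt.splitlines())
        return parser.can_fetch(user_agent, url)

    print(is_path_allowed(SAMPLE_ROBOTS_TXT, "https://example.com/private/page"))  # False
    print(is_path_allowed(SAMPLE_ROBOTS_TXT, "https://example.com/index.html"))    # True

Wiring this in would mean computing is_allowed from the fetched content and a target path supplied by the caller, instead of hard-coding True on a successful fetch.
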
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'ethical scraping' and 'crawling rules,' which imply read-only and non-destructive behavior, but it doesn't explicitly state whether this is a read operation, what permissions or rate limits apply, or what happens on errors (e.g., if the robots.txt file is missing). For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the second adds value by explaining the ethical context. There's no wasted text, and every sentence contributes meaningfully to understanding the tool's use.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter) and the presence of an output schema (which likely covers return values), the description is reasonably complete. It explains what the tool does and why to use it, though it could improve by addressing behavioral aspects like error handling or rate limits. The output schema reduces the need to describe return values in the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter ('url') with 0% schema description coverage (no title or description in the schema). The description doesn't add any parameter-specific details beyond implying the 'url' should be a domain for checking robots.txt. Since schema coverage is low, the description doesn't fully compensate by explaining the parameter's format or constraints, but it does provide some context through the tool's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check the robots.txt file for a domain to understand crawling permissions.' It specifies the verb ('check'), resource ('robots.txt file'), and goal ('understand crawling permissions'). However, it doesn't explicitly differentiate this from sibling tools like 'get_page_info' or 'scrape_webpage' that might also retrieve web content, though the focus on robots.txt is specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context: 'This tool helps ensure ethical scraping by checking the robots.txt file...' This suggests it should be used before scraping to comply with rules, but it doesn't explicitly state when to use this tool versus alternatives (e.g., 'scrape_webpage' for general content) or when not to use it (e.g., for non-web domains). The guidance is helpful but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ThreeFish-AI/scrapy-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.