
tool_monitor_changes

Monitor web page changes by comparing content hashes, producing a change detection report that tracks modifications over time.

Instructions

Check if a page has changed.

Tracks content modifications over time.

Args:
  url: URL to monitor.
  previous_hash: Previous content hash to compare against.

Returns:
  Change detection report with content hash.
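
A minimal sketch of the typical two-step workflow, assuming the official Python MCP client SDK and a placeholder launch command for the server; extract_hash is a hypothetical helper that pulls the 16-character hash out of the Markdown report, not part of the server's API.

    import asyncio
    import re

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    def extract_hash(report: str) -> str | None:
        """Pull the `Content hash: ...` value out of a baseline report."""
        match = re.search(r"hash: `([0-9a-f]{16})`", report)
        return match.group(1) if match else None


    async def main() -> None:
        # Placeholder launch command; substitute the server's real entry point.
        server = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # First call: no previous_hash, establishes the baseline.
                baseline = await session.call_tool(
                    "tool_monitor_changes",
                    arguments={"url": "https://example.com"},
                )
                previous_hash = extract_hash(baseline.content[0].text)

                # Later call: pass the stored hash back to detect changes.
                check = await session.call_tool(
                    "tool_monitor_changes",
                    arguments={"url": "https://example.com", "previous_hash": previous_hash},
                )
                print(check.content[0].text)


    asyncio.run(main())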

Input Schema

Name            Required    Description    Default
url             Yes
previous_hash   No

Output Schema

Name      Required    Description    Default
result    Yes
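
The JSON Schema tabs are not reproduced above. As a rough sketch inferred from the tool signature (url: str, previous_hash: str | None = None) and the single result field, rather than the server's published schemas, they likely resemble the following:

    # Inferred sketch only; the schemas actually generated by the server may
    # differ in titles, nullability, and default handling.
    input_schema = {
        "type": "object",
        "properties": {
            "url": {"type": "string"},
            "previous_hash": {
                "anyOf": [{"type": "string"}, {"type": "null"}],
                "default": None,
            },
        },
        "required": ["url"],
    }

    output_schema = {
        "type": "object",
        "properties": {
            "result": {"type": "string"},  # the Markdown change-detection report
        },
        "required": ["result"],
    }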

Implementation Reference

  • The actual implementation of the monitor_changes logic, which fetches the page and performs content comparison.
    async def monitor_changes(url: str, previous_content: str | None = None) -> str:
        """Check if a page has changed since last check.
    
        Args:
            url: URL to monitor.
            previous_content: Previous content hash or snippet to compare.
    
        Returns:
            Change detection report.
    
        Example:
            >>> changes = await monitor_changes("https://example.com", previous_hash)
        """
        try:
            doc = await _scraper.fetch(url, retry=1)
            current_content = doc.content
    
            # Generate content hash
            import hashlib
    
            current_hash = hashlib.sha256(current_content.encode()).hexdigest()[:16]
    
            report_lines = [
                f"# Change Monitor: {doc.title}\n",
                f"> URL: {url}\n",
                f"> Checked: {doc.fetched_at.strftime('%Y-%m-%d %H:%M:%S')}\n",
                "\n## Status\n",
            ]
    
            if previous_content:
                if previous_content == current_hash:
                    report_lines.append("✓ **No changes detected**\n")
                else:
                    report_lines.append("⚠️ **Content has changed**\n")
                    report_lines.append(f"\n- Previous hash: `{previous_content}`\n")
                    report_lines.append(f"- Current hash: `{current_hash}`\n")
            else:
                report_lines.append("ℹ️ **First check - baseline established**\n")
                report_lines.append(f"\n- Content hash: `{current_hash}`\n")
    
            # Add content preview
            report_lines.append("\n## Current Content Preview\n")
            preview = current_content[:500].strip()
            report_lines.append(f"{preview}...\n")
  • The MCP tool registration and wrapper function tool_monitor_changes that delegates to the monitor_changes helper.
    @mcp.tool()
    async def tool_monitor_changes(url: str, previous_hash: str | None = None) -> str:
        """Check if a page has changed.
    
        Tracks content modifications over time.
    
        Args:
            url: URL to monitor.
            previous_hash: Previous content hash to compare against.
    
        Returns:
            Change detection report with content hash.
        """
        return await monitor_changes(url, previous_hash)
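
For reference, the hash in the report is the SHA-256 digest of the scraped content truncated to 16 hex characters, as shown in the implementation above. A client normally copies previous_hash from an earlier report rather than recomputing it, since it is derived from the extracted content, not the raw HTML; a small sketch of the same computation:

    import hashlib


    def content_hash(content: str) -> str:
        # Mirrors the hashing in monitor_changes: SHA-256 of the extracted
        # content, truncated to the first 16 hex characters.
        return hashlib.sha256(content.encode()).hexdigest()[:16]


    print(content_hash("Example Domain ..."))  # prints a 16-character hex string
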
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions tracking content modifications and returning a change detection report, but fails to detail critical traits such as rate limits, authentication needs, error handling (e.g., for invalid URLs), or how the hash is computed. This leaves gaps in understanding the tool's operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first ('Check if a page has changed'), followed by additional context and parameter details. Every sentence adds value without redundancy, and the structure is clear and efficient, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but with an output schema), the description is reasonably complete. It covers the purpose, parameters, and return value, and the output schema likely details the report structure, reducing the need to explain returns. However, it lacks behavioral details like error cases or performance constraints, leaving some gaps in full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context beyond the input schema, which has 0% schema description coverage. It explains that 'url' is for monitoring and 'previous_hash' is for comparison against previous content, clarifying their roles. However, it does not specify format details (e.g., URL validation, hash algorithm) or the implication of 'previous_hash' being null, so it compensates well but not fully for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check if a page has changed' and 'Tracks content modifications over time.' It specifies the verb ('check', 'tracks') and resource ('page', 'content'), making the function evident. However, it does not explicitly differentiate this tool from siblings like 'tool_scrape_url' or 'tool_crawl_docs', which might also involve page content, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lacks context on prerequisites (e.g., needing a previous hash for comparison), exclusions, or comparisons to sibling tools like 'tool_scrape_url' for content extraction or 'tool_compare_sources' for multi-source analysis. Usage is implied only through the function, with no explicit when/when-not instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

