Unstructured-IO

Unstructured API MCP Server

Official

check_crawlhtml_status

Monitor the progress of an HTML crawling job by checking its current status using the crawl job ID.

Instructions

Check the status of an existing Firecrawl HTML crawl job.

Args:
    crawl_id: ID of the crawl job to check

Returns:
    Dictionary containing the current status of the crawl job
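For illustration, the returned dictionary for an in-progress crawl job might look like the sketch below. This is a hypothetical example: the field names mirror the status_info dict built in the implementation reference, but the id value and counts are invented.

```python
# Hypothetical status dictionary for an in-progress HTML crawl job.
# Field names follow the handler's status_info dict; the values are
# made up for illustration.
example_status = {
    "id": "abc123",
    "status": "scraping",   # becomes "completed" when the job finishes
    "completed_urls": 12,   # pages crawled so far
    "total_urls": 40,       # pages discovered for this crawl
}

def is_finished(status: dict) -> bool:
    """Return True once the crawl job reports completion."""
    return status.get("status") == "completed"

print(is_finished(example_status))  # → False
```

An agent can poll the tool until is_finished returns True, or stop early if the dictionary contains an "error" key instead of status fields.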

Input Schema

Name      Required  Description  Default
crawl_id  Yes       -            -

Output Schema

Name    Required  Description  Default
result  Yes       -            -

Implementation Reference

  • The primary handler function for the 'check_crawlhtml_status' MCP tool. It takes a crawl_id and delegates to the internal _check_job_status helper to query the Firecrawl API for the job status.
    async def check_crawlhtml_status(
        crawl_id: str,
    ) -> Dict[str, Any]:
        """Check the status of an existing Firecrawl HTML crawl job.
    
        Args:
            crawl_id: ID of the crawl job to check
    
        Returns:
            Dictionary containing the current status of the crawl job
        """
        return await _check_job_status(crawl_id, "crawlhtml")
  • Core helper function that performs the actual status check: it initializes the FirecrawlApp client, queries the appropriate API endpoint for the given job_type, and formats the response.
    async def _check_job_status(
        job_id: str,
        job_type: Firecrawl_JobType,
    ) -> Dict[str, Any]:
        """Generic function to check the status of a Firecrawl job.
    
        Args:
            job_id: ID of the job to check
        job_type: Type of job ('crawlhtml' or 'llmfulltxt')
    
        Returns:
            Dictionary containing the current status of the job
        """
        # Get configuration with API key
        config = _prepare_firecrawl_config()
    
        # Check if config contains an error
        if "error" in config:
            return {"error": config["error"]}
    
        try:
            # Initialize the Firecrawl client
            firecrawl = FirecrawlApp(api_key=config["api_key"])
    
            # Check status based on job type
            if job_type == "crawlhtml":
                result = firecrawl.check_crawl_status(job_id)
    
                # Return a more user-friendly response for crawl jobs
                status_info = {
                    "id": job_id,
                    "status": result.get("status", "unknown"),
                    "completed_urls": result.get("completed", 0),
                    "total_urls": result.get("total", 0),
                }
    
            elif job_type == "llmfulltxt":
                result = firecrawl.check_generate_llms_text_status(job_id)
    
                # Return a more user-friendly response for llmfull.txt jobs
                status_info = {
                    "id": job_id,
                    "status": result.get("status", "unknown"),
                }
    
                # Add llmfull.txt content if job is completed
                if result.get("status") == "completed" and "data" in result:
                    status_info["llmfulltxt"] = result["data"].get("llmsfulltxt", "")
    
            else:
                return {"error": f"Unknown job type: {job_type}"}
    
            return status_info
        except Exception as e:
            return {"error": f"Error checking {job_type} status: {str(e)}"}
  • The registration of the check_crawlhtml_status tool using the mcp.tool() decorator within the register_external_connectors function.
    mcp.tool()(check_crawlhtml_status)
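Since the handler is an ordinary async function that returns either a status dictionary or an error dictionary, it lends itself to a simple polling loop. The sketch below is a hypothetical usage pattern: the real check_crawlhtml_status needs a Firecrawl API key, so a stub stands in for it, and the interval and max_polls values are arbitrary.

```python
import asyncio
from typing import Any, Callable, Coroutine, Dict

async def _stub_check(crawl_id: str) -> Dict[str, Any]:
    """Stand-in for check_crawlhtml_status; pretends the job is done."""
    return {"id": crawl_id, "status": "completed",
            "completed_urls": 3, "total_urls": 3}

async def wait_for_crawl(
    crawl_id: str,
    check: Callable[[str], Coroutine[Any, Any, Dict[str, Any]]],
    interval: float = 0.01,
    max_polls: int = 5,
) -> Dict[str, Any]:
    """Poll a crawl job until it completes, errors, or max_polls is hit."""
    for _ in range(max_polls):
        status = await check(crawl_id)
        if "error" in status or status.get("status") == "completed":
            return status
        await asyncio.sleep(interval)
    return {"error": f"crawl {crawl_id} still running after {max_polls} polls"}

result = asyncio.run(wait_for_crawl("abc123", _stub_check))
print(result["status"])  # → completed
```

Because the handler reports failures as {"error": ...} rather than raising, callers should check for an "error" key before reading the status fields.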
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states it's a status check, implying a read-only operation, but doesn't mention potential side effects, authentication needs, rate limits, or error handling. This leaves gaps for a tool that interacts with crawl jobs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by clear sections for Args and Returns. Every sentence earns its place without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter) and the presence of an output schema (which handles return values), the description is minimally adequate. However, with no annotations and incomplete behavioral context, it could benefit from more details on usage scenarios or error cases to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter 'crawl_id' by specifying it as 'ID of the crawl job to check', which clarifies its purpose beyond the schema's basic title 'Crawl Id'. Since schema description coverage is 0%, this compensation is effective, though it doesn't detail format or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Check' and the resource 'status of an existing Firecrawl HTML crawl job', making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'check_llmtxt_status' or 'get_job_info', which might have overlapping functionality, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'check_llmtxt_status' for LLM text crawls or 'get_job_info' for general job status. It only implies usage by mentioning 'existing Firecrawl HTML crawl job', but lacks explicit when/when-not instructions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
