Unstructured-IO

Unstructured API MCP Server (Official)

check_llmtxt_status

Monitor the progress of an llmfull.txt generation job and retrieve its results by providing the job ID. Returns the current status and, once the job has completed, the generated text content.

Instructions

Check the status of an existing llmfull.txt generation job.

Args:
    job_id: ID of the llmfull.txt generation job to check

Returns:
    Dictionary containing the current status of the job and text content if completed

Input Schema

Name     Required   Description                                     Default
job_id   Yes        ID of the llmfull.txt generation job to check   —
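
For orientation, here is an illustrative sketch of the argument payload and the two response shapes implied by the docstring above and the handler code in the Implementation Reference below. The job ID, the in-progress status value, and the text snippet are placeholders, not output from a real job.

    # Illustrative only: placeholder job ID and content.
    arguments = {"job_id": "abc123"}

    # While the job is still running, the tool returns the ID and a status string:
    pending = {
        "id": "abc123",
        "status": "processing",  # assumed in-progress value; the code only checks for "completed"
    }

    # Once the status is "completed", the generated text is added under "llmfulltxt":
    completed = {
        "id": "abc123",
        "status": "completed",
        "llmfulltxt": "...generated llmfull.txt content...",
    }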

Implementation Reference

  • The handler function that executes the check_llmtxt_status tool. It delegates to the internal _check_job_status helper with job_type='llmfulltxt'.
    async def check_llmtxt_status(
        job_id: str,
    ) -> Dict[str, Any]:
        """Check the status of an existing llmfull.txt generation job.

        Args:
            job_id: ID of the llmfull.txt generation job to check

        Returns:
            Dictionary containing the current status of the job and text content if completed
        """
        return await _check_job_status(job_id, "llmfulltxt")
  • Registration of the check_llmtxt_status tool with the MCP server using the mcp.tool() decorator (a client-side invocation sketch follows this list).
    mcp.tool()(check_llmtxt_status)
  • Internal helper that performs the actual status check against the Firecrawl API. It handles both job types, and check_llmtxt_status calls it with job_type='llmfulltxt'.
    async def _check_job_status(
        job_id: str,
        job_type: Firecrawl_JobType,
    ) -> Dict[str, Any]:
        """Generic function to check the status of a Firecrawl job.

        Args:
            job_id: ID of the job to check
            job_type: Type of job ('crawlhtml' or 'llmtxt')

        Returns:
            Dictionary containing the current status of the job
        """
        # Get configuration with API key
        config = _prepare_firecrawl_config()

        # Check if config contains an error
        if "error" in config:
            return {"error": config["error"]}

        try:
            # Initialize the Firecrawl client
            firecrawl = FirecrawlApp(api_key=config["api_key"])

            # Check status based on job type
            if job_type == "crawlhtml":
                result = firecrawl.check_crawl_status(job_id)

                # Return a more user-friendly response for crawl jobs
                status_info = {
                    "id": job_id,
                    "status": result.get("status", "unknown"),
                    "completed_urls": result.get("completed", 0),
                    "total_urls": result.get("total", 0),
                }
            elif job_type == "llmfulltxt":
                result = firecrawl.check_generate_llms_text_status(job_id)

                # Return a more user-friendly response for llmfull.txt jobs
                status_info = {
                    "id": job_id,
                    "status": result.get("status", "unknown"),
                }

                # Add llmfull.txt content if job is completed
                if result.get("status") == "completed" and "data" in result:
                    status_info["llmfulltxt"] = result["data"].get("llmsfulltxt", "")
            else:
                return {"error": f"Unknown job type: {job_type}"}

            return status_info
        except Exception as e:
            return {"error": f"Error checking {job_type} status: {str(e)}"}
  • Import of the check_llmtxt_status function from firecrawl.py for registration in the MCP server.
    from .firecrawl import (
        cancel_crawlhtml_job,
        check_crawlhtml_status,
        check_llmtxt_status,
        invoke_firecrawl_crawlhtml,
        invoke_firecrawl_llmtxt,
    )
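
As a rough usage sketch, the tool can be called from any MCP client once the server is running. The example below uses the MCP Python SDK over stdio; the launch command ("uv run uns_mcp") and the job ID are placeholder assumptions, so substitute whatever command starts the UNS-MCP server in your environment and a job ID returned by invoke_firecrawl_llmtxt.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Placeholder launch command; adjust to how you start the UNS-MCP server locally.
    server_params = StdioServerParameters(command="uv", args=["run", "uns_mcp"])


    async def main() -> None:
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # "abc123" is a placeholder job ID returned earlier by invoke_firecrawl_llmtxt
                result = await session.call_tool(
                    "check_llmtxt_status",
                    arguments={"job_id": "abc123"},
                )
                print(result.content)


    asyncio.run(main())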

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Unstructured-IO/UNS-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.