Unstructured-IO / Unstructured API MCP Server (official)

check_llmtxt_status

Monitor the status of an llmfull.txt generation job by providing its job ID. The tool returns the job's current status and, once the job has completed, the generated text content.

Instructions

Check the status of an existing llmfull.txt generation job.

Args:
    job_id: ID of the llmfull.txt generation job to check

Returns:
    Dictionary containing the current status of the job and text content if completed

Input Schema

Name      Required   Description                                      Default
job_id    Yes        ID of the llmfull.txt generation job to check    (none)

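For reference, here is a minimal sketch of invoking this tool from an MCP client using the official MCP Python SDK over stdio. The launch command for the UNS-MCP server ("uv run uns_mcp/server.py") and the placeholder job ID are assumptions; adjust them to match your installation and the ID returned when the llmfull.txt generation job was started.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    async def main() -> None:
        # Assumed launch command for the UNS-MCP server; adjust to your setup.
        server = StdioServerParameters(command="uv", args=["run", "uns_mcp/server.py"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # "your-job-id" is a placeholder for the ID returned when the job was started.
                result = await session.call_tool(
                    "check_llmtxt_status",
                    {"job_id": "your-job-id"},
                )
                print(result.content)


    asyncio.run(main())

Once the job reports a completed status, the returned dictionary also carries the generated llmfulltxt text, as shown in the implementation below.
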
Implementation Reference

  • The main handler function for the 'check_llmtxt_status' tool. It takes a job_id and delegates to the generic _check_job_status helper with job_type='llmfulltxt' to fetch the status from the Firecrawl API.
    async def check_llmtxt_status(
        job_id: str,
    ) -> Dict[str, Any]:
        """Check the status of an existing llmfull.txt generation job.

        Args:
            job_id: ID of the llmfull.txt generation job to check

        Returns:
            Dictionary containing the current status of the job and text content if completed
        """
        return await _check_job_status(job_id, "llmfulltxt")
  • Core helper implementing the status-check logic. For 'llmfulltxt' jobs it initializes FirecrawlApp, calls check_generate_llms_text_status(job_id), and processes the result to include the status and the llmfulltxt content once the job is completed. It relies on a _prepare_firecrawl_config helper that is not shown on this page; a hypothetical sketch of that helper appears after this listing.
    async def _check_job_status(
        job_id: str,
        job_type: Firecrawl_JobType,
    ) -> Dict[str, Any]:
        """Generic function to check the status of a Firecrawl job.

        Args:
            job_id: ID of the job to check
            job_type: Type of job ('crawlhtml' or 'llmfulltxt')

        Returns:
            Dictionary containing the current status of the job
        """
        # Get configuration with API key
        config = _prepare_firecrawl_config()

        # Check if config contains an error
        if "error" in config:
            return {"error": config["error"]}

        try:
            # Initialize the Firecrawl client
            firecrawl = FirecrawlApp(api_key=config["api_key"])

            # Check status based on job type
            if job_type == "crawlhtml":
                result = firecrawl.check_crawl_status(job_id)
                # Return a more user-friendly response for crawl jobs
                status_info = {
                    "id": job_id,
                    "status": result.get("status", "unknown"),
                    "completed_urls": result.get("completed", 0),
                    "total_urls": result.get("total", 0),
                }
            elif job_type == "llmfulltxt":
                result = firecrawl.check_generate_llms_text_status(job_id)
                # Return a more user-friendly response for llmfull.txt jobs
                status_info = {
                    "id": job_id,
                    "status": result.get("status", "unknown"),
                }
                # Add llmfull.txt content if job is completed
                if result.get("status") == "completed" and "data" in result:
                    status_info["llmfulltxt"] = result["data"].get("llmsfulltxt", "")
            else:
                return {"error": f"Unknown job type: {job_type}"}

            return status_info
        except Exception as e:
            return {"error": f"Error checking {job_type} status: {str(e)}"}
  • MCP tool registration for check_llmtxt_status in the external connectors module. A sketch of how this registration fits into the surrounding server setup follows below.
    mcp.tool()(check_llmtxt_status)
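
For context, the following is a minimal sketch of how such a registration typically fits into a server built with the MCP Python SDK's FastMCP class. The server name "uns_mcp" and the standalone entry point are assumptions for illustration, not the repository's actual module layout.

    from mcp.server.fastmcp import FastMCP

    # Hypothetical server setup; the real UNS-MCP server wires this up across its own modules.
    mcp = FastMCP("uns_mcp")

    # Registering the handler exposes it to MCP clients as the "check_llmtxt_status" tool.
    mcp.tool()(check_llmtxt_status)

    if __name__ == "__main__":
        mcp.run()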

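The _check_job_status helper above depends on _prepare_firecrawl_config, which is not shown on this page. Below is a minimal sketch, assuming the Firecrawl API key is read from a FIRECRAWL_API_KEY environment variable; the variable name and error message are assumptions rather than the repository's exact implementation.

    import os
    from typing import Any, Dict


    def _prepare_firecrawl_config() -> Dict[str, Any]:
        """Hypothetical sketch of the configuration helper.

        Returns a dict with either an "api_key" entry or an "error" entry,
        matching how _check_job_status consumes the result above.
        """
        api_key = os.getenv("FIRECRAWL_API_KEY")
        if not api_key:
            return {"error": "FIRECRAWL_API_KEY environment variable is not set"}
        return {"api_key": api_key}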