extract_content

Extract structured content and metadata from URLs or files like PDFs, Word docs, and YouTube transcripts using intelligent auto-engine selection. Returns JSON output for easy processing.

Instructions

Extract content from a URL or file using Content Core's auto engine.

Args:
    url: Optional URL to extract content from
    file_path: Optional file path to extract content from

Returns: JSON object containing extracted content and metadata

Raises: ValueError: If neither or both url and file_path are provided
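
From an MCP client, the tool is called with exactly one of the two arguments. A minimal sketch using the FastMCP Python client is shown below; the server target ("server.py") is a placeholder for however the content-core MCP server is launched in your setup, and the client API may differ between FastMCP versions.

    import asyncio
    from fastmcp import Client  # assumes the FastMCP 2.x client package

    async def main():
        # "server.py" is a hypothetical stdio target; point this at the actual
        # content-core MCP server script or URL in your environment.
        async with Client("server.py") as client:
            result = await client.call_tool(
                "extract_content",
                {"url": "https://example.com/article"},
            )
            print(result)  # JSON with success, content, and metadata fields

    asyncio.run(main())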

Input Schema

Name        Required   Description                                    Default
file_path   No         Optional file path to extract content from     null
url         No         Optional URL to extract content from           null

Input Schema (JSON Schema)

{ "properties": { "file_path": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "File Path" }, "url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Url" } }, "type": "object" }

Implementation Reference

  • MCP tool registration and handler for 'extract_content' using the FastMCP @mcp.tool decorator. Thin wrapper that delegates to _extract_content_impl, which validates input and calls content_core.extract_content.
    @mcp.tool
    async def extract_content(
        url: Optional[str] = None, file_path: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Extract content from a URL or file using Content Core's auto engine.

        Args:
            url: Optional URL to extract content from
            file_path: Optional file path to extract content from

        Returns:
            JSON object containing extracted content and metadata

        Raises:
            ValueError: If neither or both url and file_path are provided
        """
        return await _extract_content_impl(url=url, file_path=file_path)
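
    For context, a FastMCP server module hosting this tool would look roughly like the sketch below; the server name and run call are assumptions, and only the decorated function above is taken verbatim from the project.

    from typing import Any, Dict, Optional
    from fastmcp import FastMCP  # assumed import; the actual module layout may differ

    mcp = FastMCP("content-core")  # hypothetical server name

    # @mcp.tool-decorated handlers such as extract_content are defined here...

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio for MCP clients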
  • Core implementation logic for the extract_content tool, including input validation, calling content_core.extract_content, timing, error handling, and response formatting.
    async def _extract_content_impl(
        url: Optional[str] = None, file_path: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Extract content from a URL or file using Content Core's auto engine.

        This is useful for processing Youtube transcripts, website content, PDFs,
        ePUB, Office files, etc. You can also use it to extract transcripts from
        audio or video files.

        Args:
            url: Optional URL to extract content from
            file_path: Optional file path to extract content from

        Returns:
            JSON object containing extracted content and metadata

        Raises:
            ValueError: If neither or both url and file_path are provided
        """
        # Validate input - exactly one must be provided
        if (url is None and file_path is None) or (
            url is not None and file_path is not None
        ):
            return {
                "success": False,
                "error": "Exactly one of 'url' or 'file_path' must be provided",
                "source_type": None,
                "source": None,
                "content": None,
                "metadata": None,
            }

        # Determine source type and validate
        source_type = "url" if url else "file"
        source = url if url else file_path

        # Additional validation for file paths
        if file_path:
            path = Path(file_path)
            if not path.exists():
                return {
                    "success": False,
                    "error": f"File not found: {file_path}",
                    "source_type": source_type,
                    "source": source,
                    "content": None,
                    "metadata": None,
                }

            # Security check - ensure no directory traversal
            try:
                # Resolve to absolute path and ensure it's not trying to access sensitive areas
                path.resolve()
                # You might want to add additional checks here based on your security requirements
            except Exception as e:
                return {
                    "success": False,
                    "error": f"Invalid file path: {str(e)}",
                    "source_type": source_type,
                    "source": source,
                    "content": None,
                    "metadata": None,
                }

        # Build extraction request
        extraction_request = {}
        if url:
            extraction_request["url"] = url
        else:
            extraction_request["file_path"] = str(Path(file_path).resolve())

        # Track start time
        start_time = datetime.utcnow()

        try:
            # Use Content Core's extract_content with auto engine
            logger.info(f"Extracting content from {source_type}: {source}")

            # Suppress stdout to prevent MoviePy and other libraries from
            # interfering with MCP protocol
            with suppress_stdout():
                result = await cc.extract_content(extraction_request)

            # Calculate extraction time
            extraction_time = (datetime.utcnow() - start_time).total_seconds()

            # Build response - result is a ProcessSourceOutput object
            response = {
                "success": True,
                "error": None,
                "source_type": source_type,
                "source": source,
                "content": result.content or "",
                "metadata": {
                    "extraction_time_seconds": extraction_time,
                    "extraction_timestamp": start_time.isoformat() + "Z",
                    "content_length": len(result.content or ""),
                    "identified_type": result.identified_type or "unknown",
                    "identified_provider": result.identified_provider or "",
                },
            }

            # Add metadata from the result
            if result.metadata:
                response["metadata"].update(result.metadata)

            # Add specific metadata based on source type
            if source_type == "url":
                if result.title:
                    response["metadata"]["title"] = result.title
                if result.url:
                    response["metadata"]["final_url"] = result.url
            elif source_type == "file":
                if result.title:
                    response["metadata"]["title"] = result.title
                if result.file_path:
                    response["metadata"]["file_path"] = result.file_path
                response["metadata"]["file_size"] = Path(file_path).stat().st_size
                response["metadata"]["file_extension"] = Path(file_path).suffix

            logger.info(f"Successfully extracted content from {source_type}: {source}")
            return response

        except Exception as e:
            logger.error(f"Error extracting content from {source_type} {source}: {str(e)}")
            return {
                "success": False,
                "error": str(e),
                "source_type": source_type,
                "source": source,
                "content": None,
                "metadata": {
                    "extraction_timestamp": start_time.isoformat() + "Z",
                    "error_type": type(e).__name__,
                },
            }
  • Business logic handler for content extraction: converts input to ProcessSourceInput, invokes langgraph workflow, and returns ProcessSourceOutput.
    async def extract_content(data: Union[ProcessSourceInput, Dict]) -> ProcessSourceOutput:
        if isinstance(data, dict):
            data = ProcessSourceInput(**data)
        result = await graph.ainvoke(data)
        return ProcessSourceOutput(**result)
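
    The MCP layer is optional; this function can also be called directly from Python. The import path content_core in the sketch below is an assumption based on the cc alias used in the wrapper above.

    import asyncio
    import content_core as cc  # assumed import path behind the "cc" alias

    async def main():
        # A dict is coerced into ProcessSourceInput before the graph runs.
        result = await cc.extract_content({"url": "https://example.com/article"})
        print(result.identified_type)         # e.g. "text/html"
        print((result.content or "")[:200])   # first part of the extracted text

    asyncio.run(main())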
  • LangGraph workflow (StateGraph) that orchestrates content extraction: identifies source type, routes to appropriate processors (PDF, audio, URL, etc.), handles downloads and cleanup.
    workflow = StateGraph(
        ProcessSourceState, input=ProcessSourceInput, output=ProcessSourceState
    )

    # Add nodes
    workflow.add_node("source", source_identification)
    workflow.add_node("url_provider", url_provider)
    workflow.add_node("file_type", file_type)
    workflow.add_node("extract_txt", extract_txt)
    workflow.add_node("extract_pdf", extract_pdf)
    workflow.add_node("extract_url", extract_url)
    workflow.add_node("extract_office_content", extract_office_content)
    workflow.add_node("extract_best_audio_from_video", extract_best_audio_from_video)
    workflow.add_node("extract_audio_data", extract_audio_data)
    workflow.add_node("extract_youtube_transcript", extract_youtube_transcript)
    workflow.add_node("delete_file", delete_file)
    workflow.add_node("download_remote_file", download_remote_file)

    # Only add docling node if available
    if DOCLING_AVAILABLE:
        workflow.add_node("extract_docling", extract_with_docling)

    # Add edges
    workflow.add_edge(START, "source")
    workflow.add_conditional_edges(
        "source",
        source_type_router,
        {
            "url": "url_provider",
            "file": "file_type",
            "text": END,
        },
    )
    workflow.add_conditional_edges(
        "file_type",
        file_type_router_docling,
    )
    workflow.add_conditional_edges(
        "url_provider",
        url_type_router,
        {
            **{
                m: "download_remote_file"
                for m in list(SUPPORTED_FITZ_TYPES)
                + list(SUPPORTED_OFFICE_TYPES)
                + list(DOCLING_SUPPORTED)
                if m not in ["text/html"]  # Exclude HTML from file download, treat as web content
            },
            "article": "extract_url",
            "text/html": "extract_url",  # Route HTML content to URL extraction
            "youtube": "extract_youtube_transcript",
        },
    )
    workflow.add_edge("url_provider", END)
    workflow.add_edge("file_type", END)
    workflow.add_edge("extract_url", END)
    workflow.add_edge("extract_txt", END)
    workflow.add_edge("extract_youtube_transcript", END)
    workflow.add_edge("extract_pdf", "delete_file")
    workflow.add_edge("extract_office_content", "delete_file")
    workflow.add_edge("extract_best_audio_from_video", "extract_audio_data")
    workflow.add_edge("extract_audio_data", "delete_file")
    workflow.add_edge("delete_file", END)
    workflow.add_edge("download_remote_file", "file_type")

    graph = workflow.compile()
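
    The router callables passed to add_conditional_edges are not shown here; each one inspects the state and returns one of the keys in the mapping, which LangGraph resolves to the next node. A hypothetical source_type_router might look like the sketch below (illustrative only, not the project's actual implementation):

    def source_type_router(state: ProcessSourceState) -> str:
        # Hypothetical sketch: return the branch key that add_conditional_edges
        # maps to the next node ("url", "file", or "text").
        if state.url:
            return "url"
        if state.file_path:
            return "file"
        return "text"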
  • Pydantic schemas for input (ProcessSourceInput) and output (ProcessSourceOutput) used throughout the extraction pipeline.
    class ProcessSourceInput(BaseModel):
        content: Optional[str] = ""
        file_path: Optional[str] = ""
        url: Optional[str] = ""
        document_engine: Optional[str] = None
        url_engine: Optional[str] = None
        output_format: Optional[str] = None
        audio_provider: Optional[str] = None
        audio_model: Optional[str] = None


    class ProcessSourceOutput(BaseModel):
        title: Optional[str] = ""
        file_path: Optional[str] = ""
        url: Optional[str] = ""
        source_type: Optional[str] = ""
        identified_type: Optional[str] = ""
        identified_provider: Optional[str] = ""
        metadata: Optional[dict] = Field(default_factory=lambda: {})
        content: Optional[str] = ""
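
    A quick sketch of how these models behave, assuming Pydantic v2 (use .dict() instead of model_dump() on v1):

    # Input built from the same dict shape the MCP handler passes through.
    inp = ProcessSourceInput(url="https://example.com/article")
    print(inp.file_path)            # "" - unset optional fields default to empty strings
    print(inp.model_dump()["url"])  # "https://example.com/article"

    # Output as produced by extract_content / the langgraph workflow.
    out = ProcessSourceOutput(content="Hello", identified_type="text/plain")
    print(out.metadata)             # {} from the default_factory
    print(out.title or "<no title>")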
