extract_content
Extract structured content and metadata from URLs or files using an auto-selection engine. Supports web pages, documents, videos, and audio files with JSON output.
Instructions
Extract content from a URL or file using Content Core's auto engine.
Args:
- `url`: Optional URL to extract content from
- `file_path`: Optional file path to extract content from

Returns:
- JSON object containing extracted content and metadata

Raises:
- `ValueError`: If neither or both of `url` and `file_path` are provided
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | No | Optional URL to extract content from | |
| file_path | No | Optional file path to extract content from | |
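Exactly one of the two parameters must be supplied; the handler rejects both zero and two sources. That rule can be sketched as a small predicate (the helper name is hypothetical, not part of the package):

```python
from typing import Optional

def validate_source_choice(url: Optional[str], file_path: Optional[str]) -> Optional[str]:
    """Return an error message when zero or two sources are given, else None.

    Hypothetical helper mirroring the handler's exactly-one-of validation.
    """
    if (url is None) == (file_path is None):
        return "Exactly one of 'url' or 'file_path' must be provided"
    return None
```

Note that the handler encodes the same condition as `(url is None and file_path is None) or (url is not None and file_path is not None)`; comparing the two `is None` results is an equivalent shorthand.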
Implementation Reference
- `src/content_core/mcp/server.py:43-175` (handler): Main handler implementation for the `extract_content` MCP tool. Validates the input (exactly one of `url` or `file_path`), performs security checks, invokes the core `content_core.extract_content` function, processes the `ProcessSourceOutput` result into a standardized JSON response with a success flag, content, and metadata (timings, lengths, titles, etc.), and handles exceptions.

```python
async def _extract_content_impl(
    url: Optional[str] = None, file_path: Optional[str] = None
) -> Dict[str, Any]:
    """
    Extract content from a URL or file using Content Core's auto engine.

    This is useful for processing Youtube transcripts, website content,
    PDFs, ePUB, Office files, etc. You can also use it to extract
    transcripts from audio or video files.

    Args:
        url: Optional URL to extract content from
        file_path: Optional file path to extract content from

    Returns:
        JSON object containing extracted content and metadata

    Raises:
        ValueError: If neither or both url and file_path are provided
    """
    # Validate input - exactly one must be provided
    if (url is None and file_path is None) or (
        url is not None and file_path is not None
    ):
        return {
            "success": False,
            "error": "Exactly one of 'url' or 'file_path' must be provided",
            "source_type": None,
            "source": None,
            "content": None,
            "metadata": None,
        }

    # Determine source type and validate
    source_type = "url" if url else "file"
    source = url if url else file_path

    # Additional validation for file paths
    if file_path:
        path = Path(file_path)
        if not path.exists():
            return {
                "success": False,
                "error": f"File not found: {file_path}",
                "source_type": source_type,
                "source": source,
                "content": None,
                "metadata": None,
            }

        # Security check - ensure no directory traversal
        try:
            # Resolve to absolute path and ensure it's not trying to
            # access sensitive areas
            path.resolve()
            # You might want to add additional checks here based on your
            # security requirements
        except Exception as e:
            return {
                "success": False,
                "error": f"Invalid file path: {str(e)}",
                "source_type": source_type,
                "source": source,
                "content": None,
                "metadata": None,
            }

    # Build extraction request
    extraction_request = {}
    if url:
        extraction_request["url"] = url
    else:
        extraction_request["file_path"] = str(Path(file_path).resolve())

    # Track start time
    start_time = datetime.utcnow()

    try:
        # Use Content Core's extract_content with auto engine
        logger.info(f"Extracting content from {source_type}: {source}")

        # Suppress stdout to prevent MoviePy and other libraries from
        # interfering with MCP protocol
        with suppress_stdout():
            result = await cc.extract_content(extraction_request)

        # Calculate extraction time
        extraction_time = (datetime.utcnow() - start_time).total_seconds()

        # Build response - result is a ProcessSourceOutput object
        response = {
            "success": True,
            "error": None,
            "source_type": source_type,
            "source": source,
            "content": result.content or "",
            "metadata": {
                "extraction_time_seconds": extraction_time,
                "extraction_timestamp": start_time.isoformat() + "Z",
                "content_length": len(result.content or ""),
                "identified_type": result.identified_type or "unknown",
                "identified_provider": result.identified_provider or "",
            },
        }

        # Add metadata from the result
        if result.metadata:
            response["metadata"].update(result.metadata)

        # Add specific metadata based on source type
        if source_type == "url":
            if result.title:
                response["metadata"]["title"] = result.title
            if result.url:
                response["metadata"]["final_url"] = result.url
        elif source_type == "file":
            if result.title:
                response["metadata"]["title"] = result.title
            if result.file_path:
                response["metadata"]["file_path"] = result.file_path
            response["metadata"]["file_size"] = Path(file_path).stat().st_size
            response["metadata"]["file_extension"] = Path(file_path).suffix

        logger.info(f"Successfully extracted content from {source_type}: {source}")
        return response

    except Exception as e:
        logger.error(f"Error extracting content from {source_type} {source}: {str(e)}")
        return {
            "success": False,
            "error": str(e),
            "source_type": source_type,
            "source": source,
            "content": None,
            "metadata": {
                "extraction_timestamp": start_time.isoformat() + "Z",
                "error_type": type(e).__name__,
            },
        }
```
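The file-path checks in the handler reduce to an existence test plus `Path.resolve()` as a basic normalization step. A standalone sketch of that logic (the helper name is illustrative; the real handler also leaves room for stricter security policies):

```python
from pathlib import Path
from typing import Any, Dict

def check_file_path(file_path: str) -> Dict[str, Any]:
    """Hypothetical helper mirroring the handler's file validation."""
    path = Path(file_path)
    # The file must exist before extraction is attempted
    if not path.exists():
        return {"success": False, "error": f"File not found: {file_path}"}
    try:
        # resolve() normalizes '..' segments and symlinks to an absolute path
        resolved = path.resolve()
    except OSError as e:
        return {"success": False, "error": f"Invalid file path: {e}"}
    return {"success": True, "resolved_path": str(resolved)}
```

As the in-code comment in the handler notes, `resolve()` alone does not enforce an allow-list; deployments with stricter requirements would compare the resolved path against permitted directories.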
- `src/content_core/mcp/server.py:177-194` (registration): MCP tool registration for `extract_content` using FastMCP's `@mcp.tool` decorator. Defines the input schema (optional `url: str` or `file_path: str`, exactly one expected) and the output as `Dict[str, Any]` via the docstring and type hints.

```python
@mcp.tool
async def extract_content(
    url: Optional[str] = None, file_path: Optional[str] = None
) -> Dict[str, Any]:
    """
    Extract content from a URL or file using Content Core's auto engine.

    Args:
        url: Optional URL to extract content from
        file_path: Optional file path to extract content from

    Returns:
        JSON object containing extracted content and metadata

    Raises:
        ValueError: If neither or both url and file_path are provided
    """
    return await _extract_content_impl(url=url, file_path=file_path)
```
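On success, the registered tool returns the envelope assembled by the handler. A minimal sketch of that shape, simplified to the always-present fields (values illustrative; the real response also carries `identified_type`, `identified_provider`, and source-specific keys):

```python
from datetime import datetime
from typing import Any, Dict

def build_response(content: str, source: str, start_time: datetime) -> Dict[str, Any]:
    """Hypothetical helper showing the success envelope's core fields."""
    # Elapsed wall-clock time since extraction started, in seconds
    extraction_time = (datetime.utcnow() - start_time).total_seconds()
    return {
        "success": True,
        "error": None,
        "source_type": "url",
        "source": source,
        "content": content or "",
        "metadata": {
            "extraction_time_seconds": extraction_time,
            "extraction_timestamp": start_time.isoformat() + "Z",
            "content_length": len(content or ""),
        },
    }
```

Error responses use the same top-level keys with `success: False`, a populated `error` string, and `content` set to `None`.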
- Helper function `extract_content` that adapts the input to `ProcessSourceInput`, invokes the LangGraph extraction workflow (`graph.ainvoke`), and returns a structured `ProcessSourceOutput`. This is the core extraction logic called by the MCP handler.

```python
async def extract_content(data: Union[ProcessSourceInput, Dict]) -> ProcessSourceOutput:
    if isinstance(data, dict):
        data = ProcessSourceInput(**data)
    result = await graph.ainvoke(data)
    return ProcessSourceOutput(**result)
```
- Input/output types for core extraction: `ProcessSourceInput` or `Dict` in, `ProcessSourceOutput` out, with a TODO for an explicit LangGraph schema.

```python
from typing import Dict, Union

from content_core.common import ProcessSourceInput, ProcessSourceOutput
from content_core.content.extraction.graph import graph

# TODO: input/output schema for LangGraph
async def extract_content(data: Union[ProcessSourceInput, Dict]) -> ProcessSourceOutput:
    if isinstance(data, dict):
        data = ProcessSourceInput(**data)
    result = await graph.ainvoke(data)
    return ProcessSourceOutput(**result)
```
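The dict-to-model coercion performed before `graph.ainvoke` follows a common accept-either pattern. A self-contained sketch, using a stand-in dataclass in place of the real `ProcessSourceInput` model from `content_core.common`:

```python
from dataclasses import dataclass
from typing import Dict, Union

@dataclass
class SourceInput:
    # Stand-in for content_core.common.ProcessSourceInput (illustrative fields)
    url: str = ""
    file_path: str = ""

def coerce_input(data: Union[SourceInput, Dict]) -> SourceInput:
    # Mirror the helper: accept a plain dict and promote it to the model
    # via keyword expansion; model instances pass through unchanged.
    if isinstance(data, dict):
        data = SourceInput(**data)
    return data
```

This is why the MCP handler can pass a plain `extraction_request` dict (`{"url": ...}` or `{"file_path": ...}`) directly to `cc.extract_content` without constructing the model itself.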