extract_content

Extract structured content and metadata from URLs or files using an auto-selection engine. Supports web pages, documents, videos, and audio files with JSON output.

Instructions

Extract content from a URL or file using Content Core's auto engine.

Args:
  url: Optional URL to extract content from.
  file_path: Optional file path to extract content from.

Returns:
  A JSON object containing the extracted content and metadata.

Raises:
  ValueError: If neither or both of url and file_path are provided.
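The exactly-one-of contract above can be sketched as a small standalone check. The function name `validate_source` is hypothetical; only the rule itself (one of `url` or `file_path`, never both, never neither) comes from the tool's documentation.

```python
from typing import Optional


def validate_source(url: Optional[str] = None,
                    file_path: Optional[str] = None) -> str:
    """Return the source type, or raise per the documented contract."""
    # Both None or both set: the XOR-style comparison catches either case.
    if (url is None) == (file_path is None):
        raise ValueError("Exactly one of 'url' or 'file_path' must be provided")
    return "url" if url is not None else "file"
```

Note that the actual handler (shown under Implementation Reference below) returns an error envelope rather than raising, so callers should be prepared for either convention.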

Input Schema

Name        Required   Description   Default
url         No         —             —
file_path   No         —             —

Output Schema

No fields documented.

Implementation Reference

  • Main handler implementation for the extract_content MCP tool. It validates input (exactly one of url or file_path), performs security checks, invokes the core content_core.extract_content function, converts the resulting ProcessSourceOutput into a standardized JSON response (success flag, content, and metadata such as timings, lengths, and titles), and handles exceptions.
    async def _extract_content_impl(
        url: Optional[str] = None, file_path: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Extract content from a URL or file using Content Core's auto engine. This is useful for processing Youtube transcripts, website content, PDFs, ePUB, Office files, etc. You can also use it to extract transcripts from audio or video files.
    
        Args:
            url: Optional URL to extract content from
            file_path: Optional file path to extract content from
    
        Returns:
            JSON object containing extracted content and metadata
    
        Raises:
            ValueError: If neither or both url and file_path are provided
        """
        # Validate input - exactly one must be provided
        if (url is None and file_path is None) or (
            url is not None and file_path is not None
        ):
            return {
                "success": False,
                "error": "Exactly one of 'url' or 'file_path' must be provided",
                "source_type": None,
                "source": None,
                "content": None,
                "metadata": None,
            }
    
        # Determine source type and validate
        source_type = "url" if url else "file"
        source = url if url else file_path
    
        # Additional validation for file paths
        if file_path:
            path = Path(file_path)
            if not path.exists():
                return {
                    "success": False,
                    "error": f"File not found: {file_path}",
                    "source_type": source_type,
                    "source": source,
                    "content": None,
                    "metadata": None,
                }
    
            # Security check - ensure no directory traversal
            try:
                # Resolve to absolute path and ensure it's not trying to access sensitive areas
                path.resolve()
                # You might want to add additional checks here based on your security requirements
            except Exception as e:
                return {
                    "success": False,
                    "error": f"Invalid file path: {str(e)}",
                    "source_type": source_type,
                    "source": source,
                    "content": None,
                    "metadata": None,
                }
    
        # Build extraction request
        extraction_request = {}
        if url:
            extraction_request["url"] = url
        else:
            extraction_request["file_path"] = str(Path(file_path).resolve())
    
        # Track start time
        start_time = datetime.utcnow()
    
        try:
            # Use Content Core's extract_content with auto engine
            logger.info(f"Extracting content from {source_type}: {source}")
    
            # Suppress stdout to prevent MoviePy and other libraries from interfering with MCP protocol
            with suppress_stdout():
                result = await cc.extract_content(extraction_request)
    
            # Calculate extraction time
            extraction_time = (datetime.utcnow() - start_time).total_seconds()
    
            # Build response - result is a ProcessSourceOutput object
            response = {
                "success": True,
                "error": None,
                "source_type": source_type,
                "source": source,
                "content": result.content or "",
                "metadata": {
                    "extraction_time_seconds": extraction_time,
                    "extraction_timestamp": start_time.isoformat() + "Z",
                    "content_length": len(result.content or ""),
                    "identified_type": result.identified_type or "unknown",
                    "identified_provider": result.identified_provider or "",
                },
            }
    
            # Add metadata from the result
            if result.metadata:
                response["metadata"].update(result.metadata)
    
            # Add specific metadata based on source type
            if source_type == "url":
                if result.title:
                    response["metadata"]["title"] = result.title
                if result.url:
                    response["metadata"]["final_url"] = result.url
            elif source_type == "file":
                if result.title:
                    response["metadata"]["title"] = result.title
                if result.file_path:
                    response["metadata"]["file_path"] = result.file_path
                response["metadata"]["file_size"] = Path(file_path).stat().st_size
                response["metadata"]["file_extension"] = Path(file_path).suffix
    
            logger.info(f"Successfully extracted content from {source_type}: {source}")
            return response
    
        except Exception as e:
            logger.error(f"Error extracting content from {source_type} {source}: {str(e)}")
            return {
                "success": False,
                "error": str(e),
                "source_type": source_type,
                "source": source,
                "content": None,
                "metadata": {
                    "extraction_timestamp": start_time.isoformat() + "Z",
                    "error_type": type(e).__name__,
                },
            }
  • MCP tool registration for 'extract_content' using FastMCP's @mcp.tool decorator. Defines input schema (optional url:str or file_path:str, exactly one expected) and output as Dict[str, Any] via docstring and type hints.
    @mcp.tool
    async def extract_content(
        url: Optional[str] = None, file_path: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Extract content from a URL or file using Content Core's auto engine.
    
        Args:
            url: Optional URL to extract content from
            file_path: Optional file path to extract content from
    
        Returns:
            JSON object containing extracted content and metadata
    
        Raises:
            ValueError: If neither or both url and file_path are provided
        """
        return await _extract_content_impl(url=url, file_path=file_path)
  • Helper function extract_content that adapts input to ProcessSourceInput, invokes the LangGraph extraction workflow (graph.ainvoke), and returns structured ProcessSourceOutput. This is the core extraction logic called by the MCP handler.
    async def extract_content(data: Union[ProcessSourceInput, Dict]) -> ProcessSourceOutput:
        if isinstance(data, dict):
            data = ProcessSourceInput(**data)
        result = await graph.ainvoke(data)
        return ProcessSourceOutput(**result)
  • Input/output types for core extraction: ProcessSourceInput/Dict -> ProcessSourceOutput, with TODO for explicit LangGraph schema.
    from typing import Dict, Union
    
    from content_core.common import ProcessSourceInput, ProcessSourceOutput
    from content_core.content.extraction.graph import graph
    
    # TODO: input/output schema for LangGraph
    
    
    async def extract_content(data: Union[ProcessSourceInput, Dict]) -> ProcessSourceOutput:
        if isinstance(data, dict):
            data = ProcessSourceInput(**data)
        result = await graph.ainvoke(data)
        return ProcessSourceOutput(**result)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the 'auto engine' but doesn't explain what this entails (e.g., extraction capabilities, limitations, or processing behavior). It also lacks details on authentication needs, rate limits, or error handling beyond the ValueError. For a tool with no annotation coverage, this leaves significant gaps in understanding its operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
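For reference, the MCP specification defines optional behavioral annotations (title, readOnlyHint, destructiveHint, idempotentHint, openWorldHint) that a server could attach to close this gap. The values below are a plausible sketch for a read-only extractor, not a statement of what this server actually declares:

```python
# Hypothetical MCP tool annotations for a read-only content extractor.
annotations = {
    "title": "Extract Content",
    "readOnlyHint": True,       # does not modify external state
    "destructiveHint": False,   # no destructive side effects
    "idempotentHint": True,     # re-running yields the same result (modulo source changes)
    "openWorldHint": True,      # reaches out to arbitrary URLs
}
```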

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with clear sections for the main description, arguments, returns, and raises. Each sentence serves a purpose, and there is no redundant information. It is front-loaded with the core functionality and efficiently organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but with an output schema), the description is reasonably complete. It covers the purpose, parameters, return format, and error conditions. The presence of an output schema means the description doesn't need to detail return values, but it could benefit from more behavioral context. Overall, it provides a solid foundation but has room for improvement in transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It lists the parameters ('url' and 'file_path') and their optional nature, and the 'Raises' section clarifies that exactly one must be provided. However, it doesn't explain the semantics beyond this (e.g., supported URL formats, file types, or path requirements). The description adds some value but doesn't fully compensate for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
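One conventional way to raise schema coverage from 0% is to attach per-parameter descriptions that the framework can surface in the generated JSON schema. The sketch below assumes Pydantic v2; the model name, description texts, and supported-format lists are illustrative, not taken from the server's source.

```python
from typing import Optional

from pydantic import BaseModel, Field


class ExtractArgs(BaseModel):
    # Hypothetical field descriptions; the format lists are examples only.
    url: Optional[str] = Field(
        default=None,
        description="HTTP(S) URL to extract from (web page, YouTube, etc.)",
    )
    file_path: Optional[str] = Field(
        default=None,
        description="Path to a local file (PDF, ePub, Office, audio, video)",
    )


schema = ExtractArgs.model_json_schema()
```

The `description` strings land in `schema["properties"]`, which is where MCP clients read per-parameter documentation from.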

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Extract content from a URL or file using Content Core's auto engine.' It specifies the verb ('extract'), resource ('content'), and method ('auto engine'), but since there are no sibling tools, it doesn't need to differentiate from alternatives. The purpose is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance through the 'Raises' section, which indicates that exactly one of 'url' or 'file_path' must be provided. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., other extraction methods or tools), and there are no sibling tools to compare against. The guidance is functional but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
