
read_pdf

Extract text, tables, and image references from PDF files (local or URL) and convert them to Markdown format for easy processing and analysis.

Instructions

Read content from a PDF file (local path or URL).
Returns a unified Markdown string containing text, tables, and image references.

Args:
    source: Local file path or URL to the PDF.
    page_range: Format "1-5" or "10". If not provided, reads all pages.
    extract_images: If True, extracts images to temp dir and links them.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| source | Yes | | |
| page_range | No | | |
| extract_images | No | | |
| force_ocr | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The handler function for the MCP tool 'read_pdf'. It is registered via the @mcp.tool() decorator. Parses PDF content using PDFParser and returns formatted Markdown with metadata and extracted text, tables, and images.
    @mcp.tool()
    async def read_pdf(
        source: str,
        page_range: str | None = None,
        extract_images: bool = False,
        force_ocr: bool = False,
    ) -> str:
        """
        Read content from a PDF file (local path or URL).
        Returns a unified Markdown string containing text, tables, and image references.
    
        Args:
            source: Local file path or URL to the PDF.
            page_range: Format "1-5" or "10". If not provided, reads all pages.
            extract_images: If True, extracts images to temp dir and links them.
        """
        result = await parser.parse(source, page_range, extract_images, force_ocr)
    
        # Return plain Markdown text rather than json.dumps(result):
        # text-based LLM clients handle direct text content best.
    
        metadata = result["metadata"]
        content = result["content"]
    
        report = f"""# PDF Extraction Result
        
    ## Metadata
    - **Title**: {metadata['title']}
    - **Page Count**: {metadata['page_count']}
    - **Source**: {metadata['source']}
    
    ## Content
    {content}
    """
        return report
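The report assembly in the handler can be exercised on its own, without a PDF or the MCP runtime. The sketch below stubs the parser result; the field names (`metadata`, `content`, `title`, `page_count`, `source`) mirror the handler above, but the stub values are illustrative:

```python
# Standalone sketch of the handler's report formatting, fed a stubbed
# parser result instead of an actual parsed PDF.
def format_report(result: dict) -> str:
    metadata = result["metadata"]
    content = result["content"]
    return f"""# PDF Extraction Result

## Metadata
- **Title**: {metadata['title']}
- **Page Count**: {metadata['page_count']}
- **Source**: {metadata['source']}

## Content
{content}
"""

stub = {
    "metadata": {"title": "Example", "page_count": 3, "source": "example.pdf"},
    "content": "Page text here.",
}
report = format_report(stub)
```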
  • The PDFParser.parse method, which contains the core logic for PDF parsing invoked by the read_pdf tool handler. Handles loading, text, image, and table extraction.
    async def parse(
        self,
        source: str,
        page_range: str | None = None,
        extract_images: bool = False,
        force_ocr: bool = False,
    ) -> Dict[str, Any]:
        """
        Main entry point to parse a PDF.
    
        Args:
            source: URL or local path.
            page_range: String like "1-5", "10", or None for all.
            extract_images: Whether to extract images.
            force_ocr: Whether to force OCR during text extraction.
    
        Returns:
            Dict containing metadata and content (markdown).
        """
        # 1. Load Document
        doc = await self.loader.load(source)
    
        try:
            # 2. Parse Page Range
            pages = self._parse_page_range(doc, page_range)
    
            # 3. Extract Text (Markdown)
            text_md = self.text_extractor.extract_text(doc, pages, force_ocr=force_ocr)
    
            # 4. Extract Images (Optional)
            images_data = []
            if extract_images:
                images_data = self.image_extractor.extract_images(doc, pages)
                # Append image references as Markdown at the end of the text.
                if images_data:
                    text_md += "\n\n## Extracted Images\n"
                    for img in images_data:
                        text_md += f"\n{img['markdown']}\n"
    
            # 5. Extract Tables
            # pdfplumber needs a filesystem path. Documents loaded from a URL
            # exist only as an in-memory stream, so save those to a temp file.
            temp_pdf_path = None
            if doc.name and os.path.exists(doc.name):
                # It's a local file
                pdf_path = doc.name
            else:
                # It's a stream (URL), save to temp
                import tempfile
    
                with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
                    doc.save(tmp.name)
                    pdf_path = tmp.name
                    temp_pdf_path = tmp.name
    
            tables_md = self.table_extractor.extract_tables(pdf_path, pages)
            if tables_md:
                text_md += "\n\n## Extracted Tables\n" + "\n\n".join(tables_md)
    
            # Cleanup temp file
            if temp_pdf_path and os.path.exists(temp_pdf_path):
                os.remove(temp_pdf_path)
    
            # 6. Construct Final Result
            metadata = {
                "page_count": len(doc),
                "title": doc.metadata.get("title", ""),
                "author": doc.metadata.get("author", ""),
                "source": source,
            }
    
            return {
                "metadata": metadata,
                "content": text_md,
                "images": [img["path"] for img in images_data],
            }
    
        finally:
            doc.close()
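  One detail worth noting in `parse`: the temp file is removed after table extraction succeeds, so an exception raised inside `extract_tables` would leave the file behind. A common pattern is to move the cleanup into a `finally` block. A self-contained sketch of that pattern (the `with_temp_pdf` name and the byte payload are illustrative, not from the server):

```python
import os
import tempfile

# Sketch of the temp-file pattern above, with cleanup in a finally block
# so an exception between creation and removal cannot leak the file.
def with_temp_pdf(data: bytes) -> int:
    tmp_path = None
    try:
        with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
            tmp.write(data)
            tmp_path = tmp.name
        # ...hand tmp_path to a path-based library such as pdfplumber...
        return os.path.getsize(tmp_path)
    finally:
        if tmp_path and os.path.exists(tmp_path):
            os.remove(tmp_path)

size = with_temp_pdf(b"%PDF-1.4 minimal")
```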
  • The @mcp.tool() decorator registers the read_pdf function as an MCP tool.
    @mcp.tool()
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it reads PDF content, returns a unified Markdown string, handles local/URL sources, supports page ranges, and can extract images to temp dir with links. However, it doesn't mention potential limitations like file size restrictions, supported URL types, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by return value, then parameter details in a clear 'Args:' section. Every sentence adds value without redundancy, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters with 0% schema coverage and an output schema exists, the description does well by explaining most parameters and the return format. However, it misses documenting the 'force_ocr' parameter entirely, and with no annotations, it could benefit from more behavioral context like performance characteristics or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'source' can be a local path or URL, 'page_range' uses format '1-5' or '10' and defaults to all pages, and 'extract_images' extracts images to temp dir and links them. However, it completely misses the 'force_ocr' parameter, leaving it undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Read content from a PDF file') and resource ('PDF file'), distinguishing it from the sibling tool 'get_pdf_metadata' which presumably retrieves metadata rather than content. The description explicitly mentions what it reads (text, tables, image references) and returns (unified Markdown string).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating it reads PDF content and returns Markdown, but doesn't explicitly say when to use this tool versus alternatives like 'get_pdf_metadata'. It mentions the tool can handle both local paths and URLs, which provides some context, but lacks explicit guidance on when to choose this over other PDF-related tools or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
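
The gaps the review identifies — the undocumented `force_ocr` parameter and the lack of explicit usage guidance — could be closed with a revised description along the lines below. This is a suggested revision, not the server's actual description; the stated `force_ocr` semantics (OCR for scanned PDFs) are an assumption inferred from the parameter name:

```python
# Hypothetical revised tool description, adding the missing force_ocr
# documentation and explicit use-this-vs-that guidance.
REVISED_DESCRIPTION = """\
Read content from a PDF file (local path or URL).
Returns a unified Markdown string containing text, tables, and image references.
Use this tool to read document content; use get_pdf_metadata when only
metadata (title, author, page count) is needed.

Args:
    source: Local file path or URL to the PDF.
    page_range: Format "1-5" or "10". If not provided, reads all pages.
    extract_images: If True, extracts images to temp dir and links them.
    force_ocr: If True, forces OCR-based extraction (slower; assumed to be
        intended for scanned PDFs with no text layer).
"""
```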

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rexfelix/readPDF_mcp_server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server