
extract_pdf_pages

Extract specific pages from PDF files to create new documents. Supports both local files and URLs, allowing you to select and save only the pages you need.

Instructions

Extract specific pages from a PDF and create a new PDF.

Supports URLs for source PDF. The source PDF will be downloaded to a temporary
directory if it's a URL. Output path must be a local file path.

Args:
    source_path: Path to the source PDF file or URL to PDF
    page_numbers: List of page numbers to extract (1-indexed)
    output_path: Path where the new PDF will be saved (must be local path)
    
Returns:
    Success message with extraction details or error message
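
For context, a hypothetical tool-call payload (the three argument names come from the input schema; the paths and values are illustrative only):

```python
import json

# Hypothetical MCP tool-call payload for extract_pdf_pages.
# Argument names match the tool's input schema; values are examples.
payload = {
    "name": "extract_pdf_pages",
    "arguments": {
        "source_path": "https://example.com/report.pdf",  # URL: downloaded to a temp dir
        "page_numbers": [1, 3, 5],                        # 1-indexed page numbers
        "output_path": "/tmp/selected_pages.pdf",         # must be a local path
    },
}
print(json.dumps(payload, indent=2))
```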

Input Schema

Name           Required   Description   Default
source_path    Yes
page_numbers   Yes
output_path    Yes

Output Schema

Name     Required   Description   Default
result   Yes

Implementation Reference

  • The `extract_pdf_pages` function is defined as an MCP tool and handles the logic for extracting specific pages from a PDF file.
    async def extract_pdf_pages(source_path: str, page_numbers: List[int], output_path: str) -> str:
        """Extract specific pages from a PDF and create a new PDF.
        
        Supports URLs for source PDF. The source PDF will be downloaded to a temporary
        directory if it's a URL. Output path must be a local file path.
    
        Args:
            source_path: Path to the source PDF file or URL to PDF
            page_numbers: List of page numbers to extract (1-indexed)
            output_path: Path where the new PDF will be saved (must be local path)
            
        Returns:
            Success message with extraction details or error message
        """
        # Resolve source path (download if URL)
        try:
            actual_source_path = resolve_path(source_path)
            
            # Validate local path if not URL
            if not is_url(source_path):
                is_valid, error_msg = validate_path(source_path)
                if not is_valid:
                    return error_msg
        
        except Exception as e:
            return f"Error resolving source path: {str(e)}"
        
        # Validate output path (must be local)
        if is_url(output_path):
            return "Error: Output path cannot be a URL, must be a local file path"
            
        is_valid, error_msg = validate_path(output_path)
        if not is_valid:
            return error_msg
        
        try:
            with open(actual_source_path, 'rb') as source_file:
                pdf_reader = PyPDF2.PdfReader(source_file)
                total_pages = len(pdf_reader.pages)
                pdf_writer = PyPDF2.PdfWriter()
                
                extracted_pages = []
                
                for page_num in page_numbers:
                    if 1 <= page_num <= total_pages:
                        pdf_writer.add_page(pdf_reader.pages[page_num - 1])
                        extracted_pages.append(page_num)
                    else:
                        logging.warning(f"Page {page_num} is out of range (1-{total_pages}), skipping")
                
                if not extracted_pages:
                    return f"Error: No valid pages to extract from PDF (total pages: {total_pages})"
                
                # Write the new PDF
                with open(output_path, 'wb') as output_file:
                    pdf_writer.write(output_file)
                
                return f"Successfully extracted {len(extracted_pages)} pages from '{source_path}' to '{output_path}'\nExtracted pages: {extracted_pages}\nSource PDF total pages: {total_pages}"
                
        except FileNotFoundError:
            return f"Error: File not found '{actual_source_path}'"
        except Exception as e:
            return f"Error extracting pages: {str(e)}"
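
The implementation relies on helper functions (`is_url`, `resolve_path`, `validate_path`) that are not shown on this page. A minimal sketch of what the first two plausibly do, assuming the real server may implement them differently:

```python
import tempfile
import urllib.parse
import urllib.request
from pathlib import Path

def is_url(path: str) -> bool:
    # Treat http/https schemes as URLs; anything else is a local path.
    return urllib.parse.urlparse(path).scheme in ("http", "https")

def resolve_path(path: str) -> str:
    # Download a URL to a temporary directory; return local paths unchanged.
    if not is_url(path):
        return path
    name = Path(urllib.parse.urlparse(path).path).name or "download.pdf"
    dest = Path(tempfile.mkdtemp()) / name
    urllib.request.urlretrieve(path, dest)
    return str(dest)
```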
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses useful behavioral traits: it downloads URLs to a temporary directory, requires a local output path, and returns a success/error message. However, it lacks details on permissions, rate limits, file size constraints, or error handling specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key usage notes and a structured breakdown of args and returns. Every sentence adds value without redundancy, making it efficient and easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no annotations, but with an output schema), the description is mostly complete. It explains inputs and general behavior, and the output schema handles return values, but it could include more on error cases or performance limits for thoroughness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning beyond the schema by explaining that 'source_path' can be a URL (downloaded temporarily), 'page_numbers' are 1-indexed, and 'output_path' must be local. It covers all three parameters but could provide more detail on formats or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
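
The 1-indexed convention matters in practice: callers pass human-readable page numbers, and the implementation subtracts one before indexing PyPDF2's zero-based pages list, skipping out-of-range values. A minimal illustration of that conversion:

```python
total_pages = 10
page_numbers = [1, 5, 99]  # 1-indexed, as the caller supplies them

# Keep only in-range pages, then convert to PyPDF2's zero-based indices.
valid = [p for p in page_numbers if 1 <= p <= total_pages]
zero_based = [p - 1 for p in valid]

print(valid)       # [1, 5]
print(zero_based)  # [0, 4]
```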

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('extract specific pages from a PDF and create a new PDF'), identifies the resource (PDF), and distinguishes it from siblings like 'merge_pdfs' (which combines PDFs) and 'read_pdf_pages' (which likely reads content without creating a new file).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (extracting pages to create a new PDF) and mentions support for URLs, but does not explicitly state when not to use it or name alternatives among siblings (e.g., 'merge_pdfs' for combining PDFs or 'read_pdf_pages' for just reading).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lockon-n/pdf-tools-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.