
USPTO Final Petition Decisions MCP Server

by john-walkoe

FPD_get_document_download

Download USPTO petition documents, decisions, and exhibits as PDFs using secure proxy links. Generate browser-accessible URLs for patent documents from the USPTO Final Petition Decisions database.

Instructions

Generate browser-accessible download URL for petition documents (PDFs) via secure proxy.

ALWAYS-ON PROXY (DEFAULT): Proxy server starts with MCP - download links work immediately.

Workflow:

  1. fpd_get_petition_details(petition_id='uuid', include_documents=True) → get documentBag

  2. fpd_get_document_download(petition_id='uuid', document_identifier='ABC123') → get download link

  3. Provide download link to user
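The three-step workflow can be sketched as follows. The stub bodies and return shapes below are illustrative stand-ins for the real MCP tool calls, which return richer metadata:

```python
import asyncio

# Illustrative stubs standing in for the real MCP tools; only the call
# sequence (details -> documentBag -> download link) matches the workflow.
async def fpd_get_petition_details(petition_id: str, include_documents: bool):
    return {"documentBag": [{"documentIdentifier": "ABC123"}]}

async def fpd_get_document_download(petition_id: str, document_identifier: str):
    return {"proxy_download_url":
            f"http://localhost:8081/download/{petition_id}/{document_identifier}"}

async def main():
    # Step 1: fetch petition details to obtain the documentBag
    details = await fpd_get_petition_details("uuid", include_documents=True)
    doc_id = details["documentBag"][0]["documentIdentifier"]
    # Step 2: generate the browser-accessible download link
    link = await fpd_get_document_download("uuid", doc_id)
    # Step 3: present the link to the user
    print(link["proxy_download_url"])

asyncio.run(main())
```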

CRITICAL RESPONSE FORMAT - Always format with BOTH clickable link and raw URL: **📁 [Download {DocumentType} ({PageCount} pages)]({proxy_url})** | Raw URL: `{proxy_url}`

Why both formats?

  • Clickable links work in Claude Desktop and most clients

  • Raw URLs enable copy/paste in Msty and other clients where links aren't clickable
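A small helper illustrating the required dual format (the function name is ours for illustration, not part of the server):

```python
def format_download_response(doc_type: str, page_count: int, proxy_url: str) -> str:
    # Clickable markdown link for clients that render links (e.g. Claude
    # Desktop), plus the raw URL for clients where links aren't clickable.
    return (f"**📁 [Download {doc_type} ({page_count} pages)]({proxy_url})**"
            f" | Raw URL: `{proxy_url}`")

print(format_download_response("Decision Document", 12,
                               "http://localhost:8081/download/uuid/ABC123"))
```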

Document types:

  • Petition document: Original petition filed with USPTO

  • Decision document: Director's final decision

  • Supporting exhibits: Declarations, prior art, technical documents

Parameters:

  • petition_id: Petition UUID from search results

  • document_identifier: Document identifier from documentBag

  • proxy_port: Optional (defaults to FPD_PROXY_PORT env var or 8081)

  • generate_persistent_link: Generate 7-day persistent link (default: True)

    • True: Attempts persistent link via USPTO PFW MCP (works across MCP restarts)

    • False: Session-based link (works while MCP running, no PFW required)
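The proxy_port default resolution described above might look like the following sketch. The implementation below calls a get_local_proxy_port helper; this body is our reconstruction from the documented behavior, not the actual source:

```python
import os

def get_local_proxy_port(default: int = 8081) -> int:
    # Check FPD_PROXY_PORT first (MCP-specific), then PROXY_PORT (generic),
    # falling back to 8081 -- mirroring the documented default.
    for var in ("FPD_PROXY_PORT", "PROXY_PORT"):
        value = os.getenv(var)
        if value:
            return int(value)
    return default
```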

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| petition_id | Yes | | |
| document_identifier | Yes | | |
| proxy_port | No | | |
| generate_persistent_link | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The primary handler function implementing the FPD_get_document_download tool. It validates inputs, detects centralized proxy (PFW MCP), generates persistent or session-based download URLs, retrieves document metadata from USPTO API, registers with proxy if applicable, and provides LLM-formatted response with clickable links and guidance.
    @mcp.tool(name="FPD_get_document_download")
    @async_tool_error_handler("document_download")
    async def fpd_get_document_download(
        petition_id: str,
        document_identifier: str,
        proxy_port: Optional[int] = None,
        generate_persistent_link: bool = True
    ) -> Dict[str, Any]:
        """Generate browser-accessible download URL for petition documents (PDFs) via secure proxy.
    
    **ALWAYS-ON PROXY (DEFAULT):** Proxy server starts with MCP - download links work immediately.
    
    **Workflow:**
    1. fpd_get_petition_details(petition_id='uuid', include_documents=True) → get documentBag
    2. fpd_get_document_download(petition_id='uuid', document_identifier='ABC123') → get download link
    3. Provide download link to user
    
    **CRITICAL RESPONSE FORMAT - Always format with BOTH clickable link and raw URL:**
    **📁 [Download {DocumentType} ({PageCount} pages)]({proxy_url})** | Raw URL: `{proxy_url}`
    
    Why both formats?
    - Clickable links work in Claude Desktop and most clients
    - Raw URLs enable copy/paste in Msty and other clients where links aren't clickable
    
    **Document types:**
    - Petition document: Original petition filed with USPTO
    - Decision document: Director's final decision
    - Supporting exhibits: Declarations, prior art, technical documents
    
    **Parameters:**
    - petition_id: Petition UUID from search results
    - document_identifier: Document identifier from documentBag
    - proxy_port: Optional (defaults to FPD_PROXY_PORT env var or 8081)
    - generate_persistent_link: Generate 7-day persistent link (default: True)
      - True: Attempts persistent link via USPTO PFW MCP (works across MCP restarts)
      - False: Session-based link (works while MCP running, no PFW required)"""
        try:
            # Input validation
            if not petition_id or len(petition_id.strip()) == 0:
                return format_error_response("Petition ID cannot be empty", 400)
            if not document_identifier or len(document_identifier.strip()) == 0:
                return format_error_response("Document identifier cannot be empty", 400)
    
            # Handle persistent link generation (requires PFW MCP)
            if generate_persistent_link:
                centralized_port = os.getenv('CENTRALIZED_PROXY_PORT', '').lower()
    
                if centralized_port and centralized_port != 'none':
                    # PFW centralized proxy detected - forward to PFW for persistent link
                    try:
                        pfw_port = int(centralized_port)
                        logger.info(f"Generating persistent link via centralized USPTO PFW proxy (port {pfw_port})")
    
                        # Construct persistent link request to PFW proxy
                        # PFW proxy should have an endpoint for generating persistent links
                        # Format: POST http://localhost:8080/persistent-link
                        persistent_link_url = f"http://localhost:{pfw_port}/persistent-link"
    
                        async with httpx.AsyncClient() as client:
                            response = await client.post(
                                persistent_link_url,
                                json={
                                    "source": "fpd",
                                    "petition_id": petition_id,
                                    "document_identifier": document_identifier,
                                    "expires_days": 7
                                },
                                timeout=30.0
                            )
    
                            if response.status_code == 200:
                                result = response.json()
                                return {
                                    "success": True,
                                    "persistent_download_url": result.get("persistent_url"),
                                    "expires_in_days": 7,
                                    "note": "Generated via centralized USPTO PFW proxy - works across MCP restarts",
                                    "ecosystem_integration": "Using PFW centralized database for persistent links"
                                }
                            else:
                                # PFW proxy doesn't support persistent links yet
                                logger.warning(f"PFW proxy persistent link generation failed: {response.status_code}")
                                # Fall through to immediate link with note
    
                    except Exception as e:
                        logger.warning(f"Failed to generate persistent link via PFW: {e}")
                        # Fall through to immediate link with note
    
                # No centralized proxy or persistent link generation failed
                # Return helpful message encouraging PFW installation
                if not centralized_port:
                    return {
                        "success": False,
                        "error": "Persistent links require USPTO PFW MCP for centralized database",
                        "suggestion": "Install USPTO PFW MCP for persistent links, or use immediate links (generate_persistent_link=false)",
                        "immediate_alternative": f"Call this tool with generate_persistent_link=false for session-based download link",
                        "pfw_benefits": [
                            "7-day persistent encrypted links (work across MCP restarts)",
                            "Centralized proxy server (unified rate limiting)",
                            "Cross-MCP document sharing and caching",
                            "Complete USPTO prosecution + petition workflow"
                        ],
                        "note": "FPD provides immediate downloads only - PFW provides persistent links + enhanced features",
                        "recommendation": "Install both USPTO FPD + PFW MCPs for complete patent lifecycle analysis"
                    }
                else:
                    # PFW is available but persistent link generation failed - fallback to immediate link
                    logger.info("Persistent link generation not available, falling back to immediate link")
    
            # Enhanced proxy port detection with centralized proxy support
            if proxy_port is None:
                # Check if centralized proxy is available (and not "none")
                centralized_port = os.getenv('CENTRALIZED_PROXY_PORT', '').lower()
                if centralized_port and centralized_port != 'none':
                    proxy_port = int(centralized_port)
                    logger.info(f"Using centralized USPTO proxy on port {proxy_port}")
                else:
                    # Check FPD_PROXY_PORT first (MCP-specific), then PROXY_PORT (generic)
                    proxy_port = get_local_proxy_port()
                    logger.info(f"Using local FPD proxy on port {proxy_port}")
    
            # Start proxy server if not already running (unless using centralized proxy)
            centralized_port_check = os.getenv('CENTRALIZED_PROXY_PORT', '').lower()
            if not centralized_port_check or centralized_port_check == 'none':
                await _ensure_proxy_server_running(proxy_port)
            else:
                # Centralized proxy is already running (managed by PFW MCP)
                logger.info("Using centralized proxy - no local proxy startup needed")
    
            # Construct proxy URL (local default is 8081, avoiding conflict with the PFW proxy on 8080)
            proxy_url = f"http://localhost:{proxy_port}/download/{petition_id}/{document_identifier}"
    
            # Also construct direct API URL for reference
            direct_url = f"{api_client.base_url}/{petition_id}/documents/{document_identifier}"
    
            # Get petition details to find document metadata
            petition_result = await api_client.get_petition_by_id(petition_id, include_documents=True)
    
            if "error" in petition_result:
                return petition_result
    
            # Extract from nested structure
            petition_data = petition_result.get(FPDFields.PETITION_DECISION_DATA_BAG, [])
            if not petition_data:
                return format_error_response("Petition data not found", 404)
    
            # Get documentBag from first petition in array
            documents = petition_data[0].get(FPDFields.DOCUMENT_BAG, [])
    
            # Find document metadata
            document_metadata = None
            for doc in documents:
                if doc.get(FPDFields.DOCUMENT_IDENTIFIER) == document_identifier:
                    document_metadata = doc
                    break
    
            if not document_metadata:
                return format_error_response(
                    f"Document {document_identifier} not found in petition {petition_id}", 404
                )
    
            # Track if centralized proxy registration succeeds
            centralized_registration_success = False
    
            # Register document with centralized proxy if using PFW
            # Check if CENTRALIZED_PROXY_PORT is set and not "none"
            centralized_port_env = os.getenv('CENTRALIZED_PROXY_PORT', '').lower()
            if centralized_port_env and centralized_port_env != 'none':
                # Extract PDF download URL from document metadata
                download_options = document_metadata.get(FPDFields.DOWNLOAD_OPTION_BAG, [])
                pdf_download_url = None
    
                for option in download_options:
                    if option.get(FPDFields.MIME_TYPE_IDENTIFIER) == 'PDF':
                        pdf_download_url = option.get(FPDFields.DOWNLOAD_URL)
                        break
    
                if pdf_download_url:
                    # Extract metadata for enhanced filename generation
                    petition_mail_date = petition_data[0].get(FPDFields.PETITION_MAIL_DATE)
                    app_number = petition_data[0].get(FPDFields.APPLICATION_NUMBER_TEXT)
                    patent_number = petition_data[0].get(FPDFields.PATENT_NUMBER)
                    doc_description = document_metadata.get(FPDFields.DOCUMENT_CODE_DESCRIPTION_TEXT)
                    doc_code = document_metadata.get(FPDFields.DOCUMENT_CODE)
    
                    # Generate enhanced filename using local proxy logic
                    enhanced_filename = generate_enhanced_filename(
                        petition_mail_date=petition_mail_date,
                        app_number=app_number,
                        patent_number=patent_number,
                        document_description=doc_description,
                        document_code=doc_code,
                        max_desc_length=40
                    )
    
                    logger.info(f"Generated enhanced filename for PFW registration: {enhanced_filename}")
    
                    try:
                        # Register FPD document with PFW centralized proxy
                        # Use the already validated centralized_port_env variable
                        pfw_port = int(centralized_port_env)
                        register_url = f"http://localhost:{pfw_port}/register-fpd-document"
    
                        # Create secure token for document access
                        access_token = mcp_auth.create_document_access_token(
                            petition_id=petition_id,
                            document_identifier=document_identifier,
                            application_number=app_number
                        )
    
                        async with httpx.AsyncClient() as client:
                            response_reg = await client.post(
                                register_url,
                                json={
                                    "source": "fpd",
                                    "petition_id": petition_id,
                                    "document_identifier": document_identifier,
                                    "download_url": pdf_download_url,
                                    "access_token": access_token,  # Secure token instead of raw API key
                                    "application_number": app_number,
                                    "enhanced_filename": enhanced_filename  # Professional filename for downloads
                                },
                                timeout=5.0
                            )
    
                            if response_reg.status_code == 200:
                                logger.info(f"✅ Successfully registered FPD document with centralized proxy")
                                centralized_registration_success = True
                            else:
                                logger.warning(
                                    f"❌ Failed to register document with centralized proxy: HTTP {response_reg.status_code}"
                                )
                                try:
                                    error_detail = response_reg.json()
                                    logger.warning(f"   Registration error details: {error_detail}")
                                except Exception:
                                    logger.warning(f"   Response body: {response_reg.text[:500]}")
    
                    except Exception as e:
                        logger.warning(f"❌ Failed to register document with centralized proxy: {e}")
    
            # Implement fallback: if centralized registration failed, use local proxy
            # Only applies if we actually tried to use centralized proxy (not "none")
            centralized_port_check = os.getenv('CENTRALIZED_PROXY_PORT', '').lower()
            if centralized_port_check and centralized_port_check != 'none' and not centralized_registration_success:
                logger.warning("⚠️  Centralized proxy registration failed - falling back to local FPD proxy")
                # Start local proxy as fallback
                local_proxy_port = get_local_proxy_port()
                await _ensure_proxy_server_running(local_proxy_port)
                # Update proxy URL to use local proxy
                proxy_url = f"http://localhost:{local_proxy_port}/download/{petition_id}/{document_identifier}"
                logger.info(f"🔄 Using local FPD proxy on port {local_proxy_port} for this download")
    
            # Determine proxy type for response metadata
            centralized_port_check = os.getenv('CENTRALIZED_PROXY_PORT', '').lower()
            proxy_type = "centralized" if (centralized_port_check and centralized_port_check != 'none' and centralized_registration_success) else "local"
            proxy_port_used = proxy_port if proxy_type == "centralized" else get_local_proxy_port()
    
            # Build response with LLM guidance for clickable links
            response = {
                "success": True,
                "petition_id": petition_id,
                "document_identifier": document_identifier,
                "proxy_download_url": proxy_url,
                "direct_url": direct_url,
                "document_info": document_metadata,
    
                # Proxy information for debugging
                "proxy_info": {
                    "type": proxy_type,
                    "port": proxy_port_used,
                    "status": "centralized_registered" if centralized_registration_success else "local_fallback"
                },
    
                # NEW: Explicit LLM guidance for proper response formatting
                "llm_response_guidance": {
                    "critical_requirement": "ALWAYS provide BOTH clickable markdown link AND raw URL",
                    "required_format": f"**📁 [Download {document_metadata.get('documentFileName', 'Document')} ({document_metadata.get('pageCount', 'N/A')} pages)]({proxy_url})** | Raw URL: `{proxy_url}`",
                    "user_expectation": "User requested a download - they need immediate browser access to the PDF",
                    "wrong_response": "Don't just show the raw URL or tool results",
                    "correct_response": "Format as clickable markdown link with document description and page count PLUS raw URL for copy/paste",
                    "explanation": "Clickable link works in Claude Desktop, raw URL enables copy/paste in Msty and other clients where links aren't clickable"
                },
    
                "access_instructions": {
                    "method": "Proxy server download (recommended) or direct API access",
                    "proxy_url": f"{proxy_url} - Click to download via secure proxy",
                    "proxy_port": proxy_port_used,
                    "proxy_note": f"Proxy handles USPTO API authentication ({proxy_type} proxy on port {proxy_port_used})",
                    "rate_limit": "USPTO allows 5 downloads per 10 seconds",
                    "file_type": "PDF document",
                    "estimated_size": f"{document_metadata.get('pageCount', 'unknown')} pages"
                },
    
                "llm_guidance": {
                    "next_steps": [
                        f"Present proxy download URL to user: **📁 [Download {document_metadata.get('documentFileName', 'Document')} ({document_metadata.get('pageCount', 'unknown')} pages)]({proxy_url})**",
                        f"Proxy server is now running on port {proxy_port_used} (started automatically)",
                        "User can click the link to download PDF directly through secure proxy",
                        "Proxy server handles USPTO API authentication automatically"
                    ],
                    "document_context": {
                        "petition_type": petition_data[0].get(FPDFields.DECISION_PETITION_TYPE_CODE_DESCRIPTION_TEXT, "Unknown"),
                        "decision_outcome": petition_data[0].get(FPDFields.DECISION_TYPE_CODE_DESCRIPTION_TEXT, "Unknown"),
                        "decision_date": petition_data[0].get(FPDFields.DECISION_DATE, "Unknown")
                    }
                },
    
                # Critical UX reminder
                "ux_critical": "The user wants this PDF file - make the download link immediately clickable!",
    
                # Response validation hints
                "response_validation": {
                    "check_for_markdown_link": "Response should contain [text](url) format",
                    "check_for_clickable_emoji": "Should start with 📁 emoji for visual recognition",
                    "check_for_description": "Link text should describe the document type and page count",
                    "success_pattern": f"**📁 [Download {document_metadata.get('documentFileName', 'DocumentType')} ({document_metadata.get('pageCount', 'N')} pages)](http://localhost:{proxy_port_used}/download/...)**"
                }
            }
    
            return response
    
        except ValueError as e:
            logger.warning(f"Validation error in get document download: {str(e)}")
            return format_error_response(str(e), 400)
        except httpx.HTTPStatusError as e:
            logger.error(f"API error in get document download: {e.response.status_code} - {e.response.text}")
            return format_error_response(f"API error: {e.response.text}", e.response.status_code)
        except httpx.TimeoutException as e:
            logger.error(f"API timeout in get document download: {str(e)}")
            return format_error_response("Request timeout - please try again", 408)
        except Exception as e:
            logger.error(f"Unexpected error in get document download: {str(e)}")
            return format_error_response(f"Internal error: {str(e)}", 500)
  • MCP tool registration decorator for FPD_get_document_download.
    @mcp.tool(name="FPD_get_document_download")
  • Helper function to run the proxy server (FastAPI app) which serves the /download/{petition_id}/{document_identifier} endpoint used by the tool's generated URLs.
    async def _run_proxy_server(port: int = 8081):
        """Run the FastAPI proxy server
    
        Uses API key from Settings (which may come from secure storage or environment variables)
        """
        try:
            import uvicorn
            from .proxy.server import create_proxy_app
    
            # Pass API key and port from Settings to proxy server
            # This allows proxy to work with secure storage (Windows DPAPI)
            app = create_proxy_app(api_key=settings.uspto_api_key, port=port)
            config = uvicorn.Config(
                app,
                host="127.0.0.1",
                port=port,
                log_level="info",
                access_log=False  # Reduce noise in logs
            )
            server = uvicorn.Server(config)
            logger.info(f"HTTP proxy server starting on http://127.0.0.1:{port}")
            await server.serve()
    
        except Exception as e:
            global _proxy_server_running
            _proxy_server_running = False
            logger.error(f"Proxy server failed: {e}")
            raise
  • Helper function that ensures the proxy server is started automatically when the tool is called (always-on or on-demand mode).
    async def _ensure_proxy_server_running(port: int = 8081):
        """Ensure the proxy server is running (auto-start on first download)"""
        global _proxy_server_running, _proxy_server_task
    
        if not _proxy_server_running:
            logger.info(f"Starting HTTP proxy server on port {port}")
    
            # Wrap background task with exception handler
            async def safe_proxy_runner():
                try:
                    await _run_proxy_server(port)
                except Exception as e:
                    logger.error(f"Proxy server crashed: {e}", exc_info=True)
                    global _proxy_server_running
                    _proxy_server_running = False
                    # Allow graceful degradation - main server continues without proxy
    
            _proxy_server_task = asyncio.create_task(safe_proxy_runner())
            _proxy_server_running = True
            # Give the server a moment to start
            await asyncio.sleep(0.5)
            logger.info(f"Proxy server started successfully on port {port}")
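The handler reads CENTRALIZED_PROXY_PORT in several places with the same unset-or-'none' check; a small helper like the following sketch (our suggestion, not present in the source) could consolidate that logic:

```python
import os
from typing import Optional

def centralized_proxy_port() -> Optional[int]:
    """Return the PFW centralized proxy port, or None if unset or disabled.

    Mirrors the repeated pattern in the handler: an empty value or the
    literal string 'none' (case-insensitive) means no centralized proxy.
    """
    raw = os.getenv("CENTRALIZED_PROXY_PORT", "").strip().lower()
    if not raw or raw == "none":
        return None
    return int(raw)
```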
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so effectively. It explains the proxy behavior ('ALWAYS-ON PROXY (DEFAULT): Proxy server starts with MCP'), link persistence options ('True: Attempts persistent link via USPTO PFW MCP', 'False: Session-based link'), and response format requirements ('CRITICAL RESPONSE FORMAT - Always format with BOTH clickable link and raw URL'). The only minor gap is the lack of explicit mention of authentication requirements or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, proxy info, workflow, response format, document types, parameters) and front-loads the core purpose. While comprehensive, some sections, such as the detailed workflow and the response format justification, could be slightly more concise; otherwise, nearly every sentence earns its place by adding value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no annotations, but with output schema), the description is remarkably complete. It covers purpose, workflow integration, behavioral details (proxy, persistence), parameter semantics, response formatting requirements, and document type context. The presence of an output schema means the description doesn't need to explain return values, and it focuses appropriately on the operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed semantic explanations for all parameters. It explains petition_id ('Petition UUID from search results'), document_identifier ('Document identifier from documentBag'), proxy_port ('Optional (defaults to FPD_PROXY_PORT env var or 8081)'), and generate_persistent_link with its two modes and implications. This goes well beyond what the bare schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'Generate browser-accessible download URL for petition documents (PDFs) via secure proxy.' This is a specific verb ('Generate') + resource ('download URL for petition documents') that clearly distinguishes it from sibling tools like fpd_get_petition_details (which retrieves details) or FPD_get_document_content_with_mistral_ocr (which extracts content via OCR).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow guidance: '1. fpd_get_petition_details(petition_id='uuid', include_documents=True) → get documentBag 2. fpd_get_document_download(petition_id='uuid', document_identifier='ABC123') → get download link 3. Provide download link to user.' It clearly positions this as step 2 in a sequence and distinguishes it from content extraction tools by focusing on URL generation rather than document analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
