This LinkedIn MCP Server enables job search automation and CV optimization through integration with Anthropic's Claude. It can:
Generate Job Search URLs: Create properly formatted LinkedIn job search URLs with customizable query parameters
Retrieve Job IDs: Fetch new job IDs from LinkedIn search results with pagination support for exploring multiple pages
Extract Job Metadata: Get detailed information including title, company, description, and requirements for specific job IDs
Adapt CV to Job Descriptions: Tailor Francisco Perez-Sorrosal's CV to match specific job requirements using job ID, position, and location
Local Caching: Utilize caching mechanisms to store job descriptions and prevent redundant web scraping
The server streamlines the job application process by automating LinkedIn job discovery and enabling targeted CV customization based on extracted job requirements.
Used for hosting the MCP server that serves LinkedIn profile data to Claude
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@ followed by the MCP server name and your instructions, e.g., "@LinkedIn MCP Server show my recent job applications"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
LinkedIn Job Search for Claude
An autonomous MCP server that continuously scrapes LinkedIn jobs and provides instant database-backed queries. Features background scraping profiles, application tracking, and composable response models.
Features
MCP Server (Backend)
Autonomous background scraping — Configurable profiles that scrape continuously
SQLite database with FTS5 full-text search and WAL mode for concurrent access
11 MCP tools organized into 4 categories: Query, Profile Management, Application Tracking, Analytics
Cache-first serving — Instant (<100ms) queries from local database
Async HTTP scraping with httpx (no browser required)
Composable Pydantic models — Token-efficient responses with exclude_none=True
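The exclude_none behavior can be sketched with a minimal Pydantic model. The field names below are illustrative, not the server's actual schema:

```python
from typing import Optional

from pydantic import BaseModel


class JobSummary(BaseModel):
    # Illustrative fields; the real response models live in the server code.
    job_id: str
    title: str
    company: str
    location: Optional[str] = None
    salary: Optional[str] = None


job = JobSummary(job_id="4012345678", title="ML Engineer", company="Acme")
# Unset optional fields are dropped from the payload, keeping responses
# token-efficient when many fields are absent.
payload = job.model_dump(exclude_none=True)
print(payload)  # {'job_id': '4012345678', 'title': 'ML Engineer', 'company': 'Acme'}
```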
Integrated Features
Job Querying — Composable filters (company, location, keywords, remote, visa, posted date)
Live Exploration — On-demand scraping for 1-10 most recent jobs
Profile Management — Add/update/delete autonomous scraping profiles
Application Tracking — Track application status and notes
Company Enrichment — Automatic company metadata lookup
Job Change Detection — Audit log for field changes over time
Analytics — Database statistics and scraping profile health
Related MCP server: LinkedIn MCP Server
Prerequisites
Installation
Clone the repository and install dependencies with Pixi:
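A typical sequence might look like the following (repository URL taken from the Support section below; `pixi install` assumes a standard Pixi project layout):

```shell
git clone https://github.com/francisco-perez-sorrosal/linkedin-mcp.git
cd linkedin-mcp
pixi install
```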
Project Structure
Running the Server
Local Development
The HTTP server runs at http://localhost:10000/mcp by default.
MCP Inspection Mode
Development Tasks
MCP Tools
The server exposes 11 tools organized into 4 categories. For detailed tool comparison, default parameters, and usage patterns, see skills/linkedin-job-search/references/tool-mapping.md.
Job Query Tools
1. explore_latest_jobs
Live scraping for 1-10 most recent jobs (10-30 seconds).
2. query_jobs
Instant database queries with composable filters (<100ms).
Profile Management Tools
3. add_scraping_profile
Add autonomous scraping profile (worker spawns within 30s).
4. list_scraping_profiles
List all scraping profiles with status.
5. update_scraping_profile
Update profile configuration (changes apply on next reload).
6. delete_scraping_profile
Disable (soft delete) or permanently delete profile.
Application Tracking Tools
7. mark_job_applied
Track job application with optional notes.
8. update_application_status
Update status (applied → interviewing → offered/rejected).
9. list_applications
Query applications by status.
Analytics Tools
10. get_cache_analytics
Database statistics, scraping profile health, application counts.
11. get_job_changes
Audit log of field changes over time.
Claude Desktop Integration
Local Configuration (stdio)
Add to claude_desktop_config.json:
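A minimal stdio entry might look like the sketch below. The server name and launch command are assumptions; adjust the path to your local checkout:

```json
{
  "mcpServers": {
    "linkedin-job-search": {
      "command": "python",
      "args": ["/path/to/linkedin-mcp/main.py"]
    }
  }
}
```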
Remote Configuration (HTTP)
For connecting to a remote MCP server:
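One common approach for stdio-only clients is bridging to the HTTP endpoint via the `mcp-remote` npm package; the entry below is a sketch under that assumption, using the default local endpoint:

```json
{
  "mcpServers": {
    "linkedin-job-search": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:10000/mcp"]
    }
  }
}
```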
Replace the host and port as needed for your deployment.
MCP Bundle (mcpb)
Build and install as an extension:
The output file linkedin-mcp-fps.mcpb is created in mcpb-package/. Double-click to install in Claude Desktop.
Claude Code Skills
Client-side skill for workflow orchestration (located in skills/):
linkedin-job-search
Interactive workflow for job searching:
Step 1: Gather search parameters (keywords, location, distance, limit)
Step 2: Choose between live exploration or database query
Step 3: Present results in scannable table
Step 4: Refine search with different filters
Step 5: Offer next actions
Activate with: "find jobs", "search positions", "job hunt".
See skills/linkedin-job-search/SKILL.md for detailed documentation.
Architecture
1. MCP Server (main.py)
Built with FastMCP framework
Configurable transport modes: stdio, streamable-http
11 async tools for job querying, profile management, application tracking, and analytics
Cache-first serving: queries return instantly from SQLite database
Auto-detects transport mode from environment variables
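The transport auto-detection could be sketched as follows. This is a hypothetical illustration, not the code in main.py; the variable names `MCP_TRANSPORT` and `PORT` are assumptions (hosts like render.com conventionally set `PORT`):

```python
import os


def detect_transport() -> str:
    # Hypothetical sketch: an explicit override wins; otherwise a PORT-style
    # variable selects streamable-http, and stdio is the local default.
    if os.environ.get("MCP_TRANSPORT"):
        return os.environ["MCP_TRANSPORT"]
    return "streamable-http" if os.environ.get("PORT") else "stdio"


os.environ.pop("MCP_TRANSPORT", None)
os.environ.pop("PORT", None)
print(detect_transport())  # stdio
```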
2. Database Layer (db.py)
SQLite with WAL mode for concurrent reads/writes
FTS5 full-text search on job descriptions and titles
5 tables: jobs, scraping_profiles, applications, company_enrichment, job_changes
Default location: ~/.linkedin-mcp/jobs.db
Composable queries with multiple filters
Performance: <100ms for typical queries
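The FTS5 full-text search can be sketched with the standard library's sqlite3 module. The table shape is illustrative (the real schema lives in db.py), and an in-memory database stands in for ~/.linkedin-mcp/jobs.db:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for ~/.linkedin-mcp/jobs.db
conn.execute("PRAGMA journal_mode=WAL")  # no effect on :memory:; used on the file DB
conn.execute("CREATE VIRTUAL TABLE jobs_fts USING fts5(title, description)")
conn.executemany(
    "INSERT INTO jobs_fts VALUES (?, ?)",
    [
        ("ML Engineer", "Build and ship training pipelines"),
        ("Barista", "Pull espresso shots"),
    ],
)
# FTS5 MATCH performs full-text search over indexed columns.
rows = conn.execute(
    "SELECT title FROM jobs_fts WHERE jobs_fts MATCH ?", ("pipelines",)
).fetchall()
print(rows)  # [('ML Engineer',)]
```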
3. Background Scraper Service (background_scraper.py)
Runs continuously in MCP server process (async tasks)
One worker per scraping profile (configurable via MCP tools)
Default profile: San Francisco, CA, 25mi, "AI Engineer or ML Engineer or Principal Research Engineer", 2h refresh
Semaphore(10) for job scraping, Semaphore(2) for company enrichment
Adaptive rate limiting with exponential backoff
Graceful startup/shutdown with asyncio task coordination
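The retry pattern described above (bounded concurrency plus exponential backoff with jitter) might be sketched like this; function and parameter names are illustrative, not the actual background_scraper.py API:

```python
import asyncio
import random


async def fetch_with_backoff(fetch, retries=3, base_delay=1.0):
    # Hedged sketch: a Semaphore(10) bounds concurrent job scrapes, and
    # failed attempts retry with exponentially growing delay plus jitter.
    sem = asyncio.Semaphore(10)
    delay = base_delay
    for attempt in range(retries):
        async with sem:
            try:
                return await fetch()
            except Exception:
                if attempt == retries - 1:
                    raise  # out of retries: surface the error
        await asyncio.sleep(delay + random.random() * 0.1)
        delay *= 2
```

In the real service one worker per scraping profile would call a helper like this for each job it fetches.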
4. Web Scraper (scraper.py)
Async httpx for LinkedIn Guest API (no Selenium required)
Enhanced extraction: salary parsing, remote/visa detection, skills extraction
Company name normalization for fuzzy matching
Frozen dataclasses for type safety: JobSummary, JobDetail
Rate limiting with random delays (1-3s) and exponential backoff
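Frozen dataclasses reject mutation after construction, which is what makes them safe to share across async tasks. A minimal sketch (field set is illustrative; the real dataclasses live in scraper.py):

```python
import dataclasses
from dataclasses import dataclass


@dataclass(frozen=True)
class JobSummary:
    # Illustrative fields only.
    job_id: str
    title: str
    company: str


job = JobSummary("4012345678", "ML Engineer", "Acme")
try:
    job.title = "Changed"  # frozen instances raise on attribute assignment
except dataclasses.FrozenInstanceError:
    print("immutable")  # immutable
```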
LinkedIn API Endpoints
The system uses LinkedIn's guest API:
Job search: https://www.linkedin.com/jobs-guest/jobs/api/seeMoreJobPostings/search-results/
Job details: https://www.linkedin.com/jobs-guest/jobs/api/jobPosting/{job_id}
Parameters:
location: Search location (URL encoded)
distance: Radius in miles (10, 25, 35, 50, 75, 100)
keywords: Job search query (URL encoded)
start: Pagination offset
Optional filters: f_E (experience), f_JT (job type), f_WT (work arrangement), f_TPR (time posted)
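The search query string can be assembled with the standard library; the parameter values below are illustrative, and the f_TPR value in particular is an assumption:

```python
from urllib.parse import urlencode

BASE = (
    "https://www.linkedin.com/jobs-guest/jobs/api/"
    "seeMoreJobPostings/search-results/"
)

params = {
    "keywords": "AI Engineer",          # URL-encoded by urlencode
    "location": "San Francisco, CA",
    "distance": 25,
    "start": 0,                         # pagination offset
    "f_TPR": "r86400",                  # time-posted filter (assumed value)
}
url = f"{BASE}?{urlencode(params)}"
print(url)
```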
Dependencies
| Package | Version | Role |
| --- | --- | --- |
| httpx | >=0.28.1,<0.29 | Async HTTP client for LinkedIn API |
| | >=1.9.2,<2 | FastMCP framework |
| | >=4.13.4,<5 | HTML parsing |
| pydantic | >=2.10.6,<3 | Composable response models with exclude_none |
| | >=0.7.3,<0.8 | Structured logging |
Removed: selenium, requests, jsonlines, pyyaml (the JSONL cache in cache.py was deleted; caching moved to SQLite)
All dependencies are managed via Pixi (see pyproject.toml).
Migration from JSONL Cache
If you have existing JSONL cache from v0.2.0, run the migration script:
This will:
Backup existing JSONL cache (creates .jsonl.backup)
Migrate all jobs to SQLite database at ~/.linkedin-mcp/jobs.db
Transform and populate enhanced fields (salary, remote, visa, skills)
Preserve all original job data
The migration is idempotent and can be safely rerun. After migration, the JSONL cache is no longer used.
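The idempotency can be sketched as follows: if rows are keyed on job_id and written with INSERT OR IGNORE, rerunning the migration leaves existing jobs untouched. This is an illustration of the property, not the migration script itself:

```python
import json
import sqlite3


def migrate(jsonl_lines, conn):
    # Keyed on job_id; INSERT OR IGNORE makes reruns safe (no duplicates).
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs (job_id TEXT PRIMARY KEY, title TEXT)"
    )
    for line in jsonl_lines:
        rec = json.loads(line)
        conn.execute(
            "INSERT OR IGNORE INTO jobs VALUES (?, ?)",
            (rec["job_id"], rec["title"]),
        )


conn = sqlite3.connect(":memory:")
cache = ['{"job_id": "1", "title": "ML Engineer"}']
migrate(cache, conn)
migrate(cache, conn)  # rerun: row count unchanged
print(conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])  # 1
```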
Deployment
Remote Deployment (render.com)
Set environment variables in the deployment dashboard:
Generate requirements.txt for render.com:
Add runtime.txt with:
Usage Examples
Query Cached Jobs
Live Exploration
Profile Management
Application Tracking
Analytics
Using Skills
Troubleshooting
| Issue | Solution |
| --- | --- |
| Import errors | Run |
| Database locked | Another process may have the database open; close other connections |
| Background scraper not running | Check logs; verify profile is enabled in |
| Empty query results | Database may be empty; wait for first scrape or use |
| Rate limiting (429/503) | Automatic backoff; check logs for error rates |
| Permission errors | Ensure |
| Migration failed | Restore from |
Future Enhancements
Additional Client-Side Skills
Create workflow orchestration skills for uncovered tool categories:
Profile Management Skill — Interactive workflow for configuring autonomous scraping profiles
Application Tracking Skill — Guide user through marking applications and tracking status changes
Analytics Skill — Present cache statistics and job trends in scannable format
Currently, only job search has a dedicated skill. Other tools are accessed directly via MCP.
See CLAUDE.md Future Enhancements section for additional features (duplicate detection, ML scoring, proxy support, etc.).
Support
For issues and feature requests, visit: https://github.com/francisco-perez-sorrosal/linkedin-mcp
License
MIT License. See pyproject.toml for details.