Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Job Listings MCP Server Find AI developer jobs in San Francisco from the last 24 hours".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
MCP Server
A standalone Python microservice that scrapes fresh job listings using Jobspy, stores them in SQLite with deduplication, and exposes a /jobs REST endpoint for embedding in a portfolio site as a live feed.
Features
- Multi-site scraping
- Tiered role search
- Smart deduplication
- Hourly scheduled scraping via APScheduler
- Query filtering
- CORS-enabled
- Deploy-ready
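The deduplication pattern can be sketched in a few lines: key each listing on its apply link so repeat scrapes are idempotent. This is a hypothetical sketch (the real server's schema and column names may differ), using a `UNIQUE` constraint plus `INSERT OR IGNORE`:

```python
import sqlite3

# Hypothetical sketch of the dedup pattern: a UNIQUE constraint on the
# apply link plus INSERT OR IGNORE makes repeat scrapes idempotent.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY,
        job_title TEXT,
        company TEXT,
        apply_link TEXT UNIQUE  -- dedup key: same link is never inserted twice
    )
""")

def upsert_jobs(rows):
    """Insert scraped rows, silently skipping any already-seen apply_link."""
    conn.executemany(
        "INSERT OR IGNORE INTO jobs (job_title, company, apply_link) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()

scraped = [
    ("AI Solutions Engineer", "Acme Corp", "https://linkedin.com/jobs/1"),
    ("AI Solutions Engineer", "Acme Corp", "https://linkedin.com/jobs/1"),  # duplicate
    ("ML Engineer", "Beta Inc", "https://linkedin.com/jobs/2"),
]
upsert_jobs(scraped)
total = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
# total == 2: the duplicate listing was ignored
```

Pushing dedup into the database this way avoids read-then-write races when a scrape and a manual `/scrape` trigger overlap.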
Architecture
```
APScheduler (1hr) → Jobspy Scraper → SQLite (deduped) ← FastAPI /jobs
                                         ↕
                                Portfolio Site (fetch)
```
Quick Start
1. Clone & Install
```shell
cd jobs-mcp-server
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
```
2. Configure
```shell
cp .env.example .env
# Edit .env as needed
```
3. Run
```shell
python main.py
```
The server starts at http://localhost:8000. An initial scrape runs automatically in the background.
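The run step pairs the web server with a background job: an immediate scrape on startup, then one every hour. A stdlib-only sketch of that interval pattern (the real server uses APScheduler; `scrape_once` is a hypothetical stand-in for the Jobspy scrape job):

```python
import threading
import time

def start_interval(task, interval_s: float) -> threading.Event:
    """Run `task` immediately, then every `interval_s` seconds —
    the same shape as an APScheduler interval trigger with an
    immediate first run. Returns an Event that stops the loop."""
    stop = threading.Event()

    def tick():
        if stop.is_set():
            return
        task()
        timer = threading.Timer(interval_s, tick)
        timer.daemon = True  # don't block process exit
        timer.start()

    tick()  # initial run, like the server's startup scrape
    return stop

runs = []
def scrape_once():  # stand-in for the real scrape job
    runs.append(1)

# A short interval for demonstration; the server uses 1 hour.
stop = start_interval(scrape_once, interval_s=0.05)
time.sleep(0.18)
stop.set()
# runs now holds the initial run plus the repeats fired while we slept
```

APScheduler adds what this sketch omits: overlap protection, persistence of job state, and clean shutdown hooks, which is why the server uses it rather than raw timers.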
API Endpoints
GET / — Health Check
```json
{
  "status": "healthy",
  "service": "Job Listings MCP Server",
  "total_jobs_in_db": 142,
  "scrape_interval_hours": 1
}
```
GET /jobs — List Job Listings
Query Params:
| Param | Type | Description |
| --- | --- | --- |
| location | string | Filter by location (substring, case-insensitive) |
| keyword | string | Filter by keyword in job title |
| hours | int | Only jobs scraped within the last N hours |
| limit | int | Max results (default 100, max 500) |
| offset | int | Pagination offset |
Example:
```shell
curl "http://localhost:8000/jobs?location=San%20Francisco&keyword=AI&hours=24"
```
Response:
```json
{
  "count": 5,
  "filters": {
    "location": "San Francisco",
    "keyword": "AI",
    "hours": 24
  },
  "jobs": [
    {
      "id": 1,
      "job_title": "AI Solutions Engineer",
      "company": "Acme Corp",
      "location": "San Francisco, CA",
      "salary": "USD 120,000–160,000/yearly",
      "apply_link": "https://linkedin.com/jobs/...",
      "date_posted": "2025-01-15",
      "date_scraped": "2025-01-15T12:00:00+00:00",
      "source_site": "linkedin",
      "role_tier": "T2 — Secondary"
    }
  ]
}
```
POST /scrape — Manual Trigger
Triggers a scrape run in the background.
```shell
curl -X POST http://localhost:8000/scrape
```
GET /status — Last Scrape Status
```shell
curl http://localhost:8000/status
```
GET /roles — Configured Role Tiers
```shell
curl http://localhost:8000/roles
```
Deployment
Railway
1. Fork the `mcp-server` repo to a new GitHub repo (or subdirectory).
2. Connect Railway to the repo.
3. Railway auto-detects the Dockerfile.
4. Add a Volume at `/data` to persist the SQLite DB.
5. Set environment variables in the Railway dashboard.
Render
1. Create a new Web Service.
2. Point to the repo/directory.
3. Set Build Command: `pip install -r requirements.txt`
4. Set Start Command: `python main.py`
5. Add a Disk at `/data` and set `DATA_DIR=/data`.
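Railway builds from the repo's Dockerfile. As a reference point, a minimal Dockerfile for a server like this might look as follows — a hypothetical sketch, assuming `main.py` sits at the repo root and the server listens on port 8000 as in the Quick Start:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer caches across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Point the SQLite DB at the mounted Volume/Disk
ENV DATA_DIR=/data

EXPOSE 8000
CMD ["python", "main.py"]
```

The actual Dockerfile in the repo is authoritative; this sketch only illustrates why `/data` must be a mounted Volume — anything written elsewhere in the container is lost on redeploy.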
🔗 Portfolio Integration
In your Next.js portfolio, fetch from the deployed URL:
```typescript
// In a Next.js API route or client component
const API_URL = process.env.NEXT_PUBLIC_JOBS_API_URL || 'https://your-jobs-server.up.railway.app';

async function fetchJobs(filters?: { location?: string; keyword?: string; hours?: number }) {
  const params = new URLSearchParams();
  if (filters?.location) params.set('location', filters.location);
  if (filters?.keyword) params.set('keyword', filters.keyword);
  if (filters?.hours) params.set('hours', String(filters.hours));
  const res = await fetch(`${API_URL}/jobs?${params.toString()}`);
  return res.json();
}
```
License
MIT