JobDataLake
Server Details
Search 1M+ enriched job listings from 20,000+ companies. Filter by skills, salary, location, seniority, remote type, and more. Free — 500 calls/day, no signup required.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 5 of 5 tools scored. Lowest: 3.3/5.
Each tool has a clearly distinct purpose with no overlap: find_similar_jobs is for AI-based similarity discovery, get_company retrieves company profiles, get_filter_options provides metadata for filtering, get_job fetches detailed job listings, and search_jobs handles broad search queries. The descriptions clearly differentiate their functions, eliminating any ambiguity.
All tool names follow a consistent verb_noun pattern with snake_case (e.g., find_similar_jobs, get_company, get_filter_options, get_job, search_jobs). The verbs are appropriate and descriptive (find, get, search), and there are no deviations in naming conventions, making the set predictable and readable.
With 5 tools, the count is well-scoped for a job data domain, covering core operations like search, retrieval, filtering, and discovery without being overwhelming. Each tool serves a specific, necessary function, and there are no redundant or trivial additions, making the set efficient and focused.
The tool surface provides comprehensive coverage for job search and discovery, including search, detailed retrieval, company profiles, filtering options, and similarity-based recommendations. Minor gaps might include update or delete operations for job data, but these are likely unnecessary for a read-only data lake, and the tools support key agent workflows effectively.
Available Tools
5 tools

find_similar_jobs · Grade A · Read-only · Idempotent
Find jobs similar to a given job listing using AI vector similarity. Great for "more like this" discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | Job handle or ID to find similar jobs for | |
| per_page | No | Number of results | |
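As a sketch of how an agent might invoke this tool over the Streamable HTTP transport, the following constructs a JSON-RPC 2.0 `tools/call` payload. The envelope shape follows the MCP convention; the `job_id` value is an illustrative placeholder, not a real listing handle.

```python
import json

# Hypothetical MCP "tools/call" request for find_similar_jobs.
# The job_id below is a placeholder, not a real job handle.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "find_similar_jobs",
        "arguments": {
            "job_id": "example-job-handle",  # handle from a prior search_jobs call
            "per_page": 5,                   # optional: number of similar jobs to return
        },
    },
}
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the server's streamable HTTP endpoint by an MCP client.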
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints (readOnlyHint=true, destructiveHint=false, idempotentHint=true), so the description doesn't need to repeat safety aspects. It adds value by specifying the method ('AI vector similarity') and use case, but doesn't disclose additional traits like rate limits or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. The first sentence states the purpose and method, and the second sentence provides usage context. It's front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema), the rich annotations covering safety and idempotency, and 100% schema coverage, the description is mostly complete. It could be improved by mentioning the return format (e.g., list of jobs) or similarity metrics, but annotations provide sufficient context for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters well-documented in the schema. The description doesn't add meaning beyond what the schema provides (e.g., it doesn't explain how 'job_id' is used in similarity calculations or default behavior of 'per_page'). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('find jobs similar to') and resource ('a given job listing'), using the method 'AI vector similarity'. It distinguishes from sibling tools like 'get_job' (single job) and 'search_jobs' (general search) by focusing on similarity-based discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Great for "more like this" discovery'), indicating it's for similarity-based recommendations. However, it doesn't explicitly state when not to use it or name alternatives like 'search_jobs' for broader queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_company · Grade B · Read-only · Idempotent
Get company profile including open job count, industry, size, and career page URL.
| Name | Required | Description | Default |
|---|---|---|---|
| company | Yes | Company domain (e.g. "stripe.com") or handle | |
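Since the `company` parameter expects a bare domain like `"stripe.com"` or a handle, an agent holding a full careers-page URL needs to normalize it first. A minimal helper sketch (the normalization rules are reasonable assumptions, not a documented contract of the server):

```python
from urllib.parse import urlparse

def to_company_domain(value: str) -> str:
    """Normalize a company URL or domain to the bare-domain form
    get_company expects (e.g. "stripe.com"). Plain handles pass through."""
    host = urlparse(value).netloc or value  # extract the host if a full URL was given
    host = host.split("/")[0].lower()       # drop any trailing path remnants
    if host.startswith("www."):
        host = host[4:]
    return host

print(to_company_domain("https://www.stripe.com/jobs"))  # stripe.com
```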
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide strong behavioral hints (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false), so the description doesn't need to repeat safety aspects. It adds value by specifying the types of company data returned (job count, industry, etc.), which helps the agent understand the scope of information. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and lists specific data points. Every word earns its place with no redundancy or fluff, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with good annotations and no output schema, the description adequately covers what data is returned. However, it lacks details on potential limitations (e.g., what happens if the company isn't found) or response format, which could help the agent use it more effectively despite the annotations providing safety context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'company' fully documented in the schema as 'Company domain (e.g. "stripe.com") or handle'. The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'company profile', specifying what information is included (open job count, industry, size, career page URL). It distinguishes this from siblings like 'get_job' or 'search_jobs' by focusing on company data rather than job data. However, it doesn't explicitly contrast with 'find_similar_jobs' or 'get_filter_options', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'search_jobs' or 'get_job', nor does it mention prerequisites or exclusions. It simply states what the tool does without contextual usage information, leaving the agent to infer based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_filter_options · Grade A · Read-only · Idempotent
Get available filter values (seniority levels, job functions, skills, etc.) with job counts. Useful for discovering what values to use in search filters.
| Name | Required | Description | Default |
|---|---|---|---|
| facets | No | Comma-separated facet fields to retrieve | seniority,job_function,remote_type,employment_type,required_skills |
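Because `facets` is a comma-separated string, building it from a Python list avoids stray whitespace and makes the selection easy to vary. A small sketch, using facet names taken from the default value in the table above:

```python
# Compose the facets argument from a list of facet names.
# The names come from the documented default:
# "seniority,job_function,remote_type,employment_type,required_skills".
wanted = ["seniority", "required_skills"]
arguments = {"facets": ",".join(wanted)}
print(arguments)  # {'facets': 'seniority,required_skills'}
```

Calling the tool with no `facets` argument falls back to the full default set.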
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context about what the tool returns ('with job counts') and its practical application ('discovering what values to use in search filters'), which goes beyond the safety and idempotency information in annotations. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that are front-loaded with the core purpose. Every word earns its place: the first sentence defines what the tool does and its key output feature, while the second provides clear usage guidance. No unnecessary information or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 parameter, no output schema), the description provides excellent purpose clarity and usage guidelines. With annotations covering safety and idempotency, and the schema fully documenting the single parameter, the description focuses appropriately on the tool's functional role in the workflow. The only minor gap is lack of explicit mention about response format or structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'facets' parameter fully documented. The description doesn't add any parameter-specific information beyond what's in the schema, but it does provide overall context about what types of filter values are available (seniority levels, job functions, etc.), which aligns with the schema's default value examples. Baseline 3 is appropriate given the comprehensive schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'available filter values' with specific examples (seniority levels, job functions, skills, etc.). It distinguishes from sibling tools like search_jobs by focusing on metadata discovery rather than job searching, and explicitly mentions 'job counts' as a key output feature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Useful for discovering what values to use in search filters.' This directly tells the agent to use this tool for filter discovery before applying filters in search operations, clearly differentiating it from search_jobs and other siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_job · Grade A · Read-only · Idempotent
Get full details for a specific job listing including description, requirements, salary, and apply link. Use the job_handle ID from search_jobs results.
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | Job handle from search results (e.g. "dropbox-senior-full-stack-software-engineer-d3f1k") | |
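Since handles should come from `search_jobs` results rather than be constructed, a cheap sanity check before calling can catch obviously malformed IDs. This slug heuristic is an assumption inferred from the example handle in the schema, not a documented format:

```python
import re

# Heuristic (an assumption, not a documented contract): job handles from
# search_jobs look like lowercase hyphenated slugs, e.g.
# "dropbox-senior-full-stack-software-engineer-d3f1k".
HANDLE_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)+$")

def looks_like_job_handle(job_id: str) -> bool:
    """Return True if job_id matches the assumed slug shape."""
    return bool(HANDLE_RE.match(job_id))

print(looks_like_job_handle("dropbox-senior-full-stack-software-engineer-d3f1k"))  # True
```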
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide strong behavioral hints: readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds useful context by specifying the source of the job_id ('from search_jobs results'), which helps the agent understand prerequisites. However, it doesn't disclose additional traits like rate limits, error handling, or response format, so it adds some value but not rich behavioral details beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by usage guidance. Every sentence earns its place by providing essential information without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single required parameter), rich annotations covering safety and behavior, and no output schema, the description is mostly complete. It clarifies the tool's purpose, usage, and parameter source. A minor gap is the lack of information on return values or error cases, but annotations help compensate, making it sufficient for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the job_id parameter fully documented in the schema. The description adds marginal value by reinforcing that the job_id is a 'job_handle ID from search_jobs results', but doesn't provide additional syntax or format details beyond what the schema already states. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details') and resource ('for a specific job listing'), listing concrete fields like description, requirements, salary, and apply link. It explicitly distinguishes from sibling tools by mentioning 'job_handle ID from search_jobs results', which differentiates it from search_jobs (for listing) and find_similar_jobs (for related items).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use the job_handle ID from search_jobs results'), implying it should be used after search_jobs to retrieve detailed information for a specific job. It also implicitly distinguishes itself from alternatives, since it is not for searching, filtering, or finding similar jobs, which makes the context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_jobs · Grade A · Read-only · Idempotent
Search 1M+ job listings from 20K+ companies. Supports keyword search, AI semantic search, filters for location, salary, remote type, seniority, skills, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | | |
| query | No | Keyword search (title, company, skills). Use * for all jobs. | |
| skills | No | Comma-separated required skills, e.g. "Python,AWS,Kubernetes" | |
| company | No | Company domain filter, e.g. "stripe.com" | |
| sort_by | No | Sort: "posted_at:desc" (newest, default), "posted_at:asc" (oldest), "salary_max_usd:desc" (highest paid), "salary_min_usd:asc" (lowest paid) | |
| location | No | Location filter, e.g. "Remote", "San Francisco", "Germany" | |
| per_page | No | Results per page (max 100) | |
| countries | No | Comma-separated ISO country codes, e.g. "US,GB,DE" | |
| seniority | No | Comma-separated: Entry, Mid Level, Senior, Staff, Principal, Manager, Internship, Director, Lead, C Level | |
| salary_max | No | Maximum annual salary in USD | |
| salary_min | No | Minimum annual salary in USD | |
| remote_type | No | Remote work policy | |
| job_function | No | | |
| posted_within | No | Time window: "24h", "7d", "30d" — only jobs posted within this period | |
| semantic_query | No | AI semantic search. Works best with job-title-like queries (e.g. "machine learning engineer", "senior devops"). Supported for remote + tech jobs only. | |
| employment_type | No | | |
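With 16 optional parameters, it helps to assemble the arguments programmatically and drop anything unset, so only explicit filters reach the server. A sketch under the parameter names from the table above (the filter values themselves are illustrative):

```python
import json

def build_search_arguments(**filters):
    """Assemble search_jobs arguments, dropping unset filters so only
    explicitly chosen values are sent. Keys mirror the parameter table."""
    return {k: v for k, v in filters.items() if v is not None}

args = build_search_arguments(
    query="*",                     # all jobs, then narrow via filters
    skills="Python,AWS",           # comma-separated, per the schema
    seniority="Senior,Staff",
    countries="US,GB",             # ISO country codes
    salary_min=150000,             # annual USD
    posted_within="7d",
    sort_by="salary_max_usd:desc", # highest paid first
    per_page=None,                 # unset: server default applies
)
print(json.dumps(args, indent=2))
```

Note the documented constraint that `semantic_query` is supported for remote tech jobs only; a client may want to fall back to `query` outside that slice.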
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds useful context about scale (1M+ jobs) and AI semantic search limitations ('Supported for remote + tech jobs only'), but doesn't mention pagination behavior or rate limits. With annotations providing core behavioral traits, the description adds moderate value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first establishes scope and scale, the second enumerates search capabilities. Every phrase adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (16 parameters) and lack of output schema, the description provides good context on search capabilities and scale. However, it doesn't explain return values or result structure, which would be helpful since there's no output schema. It's mostly complete but has a minor gap in output clarification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is high at 81%, so the schema already documents most parameters well. The description lists filter types (location, salary, remote type, etc.) but doesn't add significant syntax or format details beyond what the schema provides. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches job listings with specific scale details (1M+ jobs from 20K+ companies) and lists supported search capabilities. It distinguishes from siblings like 'get_job' (single job) and 'find_similar_jobs' (similarity-based) by emphasizing broad search with filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching jobs with various filters, but doesn't explicitly state when to use this vs. alternatives like 'find_similar_jobs' or 'get_filter_options'. It provides clear context for search scenarios but lacks explicit exclusions or sibling comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
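A minimal sketch of generating that claim file and placing it where a static web server would expose it as /.well-known/glama.json; the `docroot` path and the email are placeholders you would substitute with your own:

```python
import json
from pathlib import Path

# Build the claim document described above. The email must match
# your Glama account; "your-email@example.com" is a placeholder.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

# "docroot" stands in for your web server's document root.
well_known = Path("docroot") / ".well-known"
well_known.mkdir(parents=True, exist_ok=True)
(well_known / "glama.json").write_text(json.dumps(claim, indent=2))
print((well_known / "glama.json").read_text())
```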
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.