Glama

Server Details

JobOracle Job Market Intelligence MCP

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ToolOracle/joboracle
GitHub Stars: 0
Server Listing: JobOracle

Tool Descriptions: C

Average 2.8/5 across 8 of 8 tools scored.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes, but job_search and remote_jobs could overlap in functionality for remote job searches, potentially causing confusion. The other tools like company_jobs, job_compare, job_trends, and salary_insights are clearly differentiated by their specific focuses on company-specific listings, market comparisons, trend analysis, and salary data, respectively.

Naming Consistency: 4/5

Tool names follow a consistent snake_case pattern throughout, such as company_jobs and job_search, which aids readability. The one outlier is health_check, which is named for server health rather than the job-market domain, though it still fits the snake_case convention.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a job market analysis server, covering key areas like job searching, company insights, trends, and salary data. Each tool appears to serve a specific, useful function without redundancy, making the set manageable and comprehensive for the domain.

Completeness: 4/5

The tool surface covers core job market operations, including search, comparison, trends, and salary insights, with no major gaps. However, there might be minor omissions, such as tools for detailed job application processes or user-specific job tracking, but these are not essential for the server's apparent purpose of market analysis and insights.

Available Tools

8 tools
company_jobs: C

All open positions at a specific company.

Parameters (JSON Schema)

Name | Required | Description | Default
company | Yes | Company name |
country | No | | de
results_per_page | No | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states it retrieves 'open positions' but doesn't mention whether this is a read-only operation, if authentication is required, rate limits, pagination behavior, or what format the results come in. For a tool with 3 parameters and no output schema, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
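The missing behavioral disclosure is exactly what MCP tool annotations exist for. A minimal sketch, assuming the server adopted the standard ToolAnnotations hint fields; the values are assumptions about how a read-only job-listing tool would typically declare itself, not the server's actual metadata:

```python
# Hypothetical MCP annotations for company_jobs. All values are
# assumptions for a read-only retrieval tool.
company_jobs_annotations = {
    "readOnlyHint": True,      # only retrieves listings, mutates nothing
    "destructiveHint": False,  # no destructive side effects
    "idempotentHint": True,    # repeating a call changes nothing
    "openWorldHint": True,     # queries an external job-board source
}

HINT_FIELDS = ("readOnlyHint", "destructiveHint", "idempotentHint", "openWorldHint")

def undisclosed_hints(annotations: dict) -> list[str]:
    """Return the behavioral hints a tool leaves unstated."""
    return [field for field in HINT_FIELDS if field not in annotations]

# A tool with no annotations, like this one, leaves all four open:
print(undisclosed_hints({}))
print(undisclosed_hints(company_jobs_annotations))  # []
```

When hints like these are absent, the description text is the only place an agent can learn whether a call is safe to repeat or reaches the open internet.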

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that gets straight to the point with zero wasted words. It's appropriately sized for a simple retrieval tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters (with only 33% schema coverage), no annotations, and no output schema, the description is insufficient. It doesn't explain what the tool returns, how results are structured, or provide enough context about the undocumented parameters to make this tool usable without additional documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 33% (only 'company' parameter has a description). The description mentions 'specific company' which aligns with the 'company' parameter, but doesn't add any meaning for the undocumented 'country' and 'results_per_page' parameters. With low schema coverage, the description fails to compensate adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
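The 33% figure corresponds to one described property out of three. A sketch of how such a coverage metric can be computed, assuming the score simply counts properties carrying a non-empty `description` (the page does not state the actual method); the schema below mirrors the reported parameter table, with the types as assumptions:

```python
def description_coverage(schema: dict) -> float:
    """Fraction of input-schema properties that have a description."""
    props = schema.get("properties", {})
    if not props:
        return 1.0  # nothing to document
    described = sum(1 for prop in props.values() if prop.get("description"))
    return described / len(props)

# company_jobs as reported on this page: only 'company' is described.
company_jobs_schema = {
    "type": "object",
    "properties": {
        "company": {"type": "string", "description": "Company name"},
        "country": {"type": "string"},            # undocumented
        "results_per_page": {"type": "integer"},  # undocumented
    },
    "required": ["company"],
}

print(round(description_coverage(company_jobs_schema), 2))  # 0.33
```

Adding one-line descriptions to `country` and `results_per_page` would lift coverage to 100% and raise the baseline for this dimension.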

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does ('All open positions at a specific company'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'job_search' or 'remote_jobs', which likely have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'job_search' or 'remote_jobs'. There's no mention of prerequisites, exclusions, or comparative context with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
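The fix the review implies is a single sentence of comparative guidance in the description itself. A hypothetical rewrite of the company_jobs description, purely illustrative, not the server's actual text:

```python
# Hypothetical improved description adding the "use X instead of Y
# when Z" guidance the review asks for.
company_jobs_description = (
    "List all open positions at a specific company. "
    "Use this tool when the user names an employer; "
    "use job_search for keyword searches across all companies, "
    "and remote_jobs when results must be remote-only."
)

# The sibling tools are now discoverable by name from the description.
print("job_search" in company_jobs_description)  # True
```

Naming the siblings directly lets an agent route between overlapping tools without guessing from tool names alone.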

health_check: C

Server status.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. 'Server status' implies a read-only, non-destructive operation, but it doesn't disclose behavioral traits such as authentication needs, rate limits, response format, or error conditions. The description is minimal and lacks necessary context for safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with 'Server status.' as a single phrase, which is appropriately sized for a simple tool. It's front-loaded and wastes no words, though it could benefit from slightly more detail without losing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema), the description is incomplete. It lacks context on what 'status' includes (e.g., health metrics, uptime), response format, or integration with sibling tools. Without annotations or output schema, more detail is needed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description doesn't add parameter semantics, but this is appropriate given the absence of parameters. Baseline is 4 for zero parameters, as the schema fully covers the input requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Server status' states what the tool does at a high level but lacks specificity. It identifies the resource (server) and action (checking status) but doesn't distinguish from siblings or provide details about what aspects of server status are checked. It's not tautological but remains vague about scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention context, prerequisites, or exclusions, and it doesn't relate to sibling tools like company_jobs or job_search. Usage is implied only by the tool name and generic description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

job_alerts_check: C

How many new jobs posted today/3 days/week for a query. Quick market pulse.

Parameters (JSON Schema)

Name | Required | Description | Default
query | Yes | Keywords |
country | No | | de
location | No | City or region |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions timeframes (today/3 days/week) and the tool's quick nature, but lacks details on permissions, rate limits, data sources, or response format. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, consisting of two short sentences that directly state the tool's function and intent. Every sentence adds value without redundancy, though it could be slightly more structured (e.g., by explicitly listing timeframes).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (3 parameters, no annotations, no output schema), the description is incomplete. It doesn't cover behavioral aspects like data freshness, error handling, or output format, and with low schema coverage (67%), it fails to compensate for gaps in parameter understanding, making it inadequate for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67% (2 out of 3 parameters have descriptions), so the baseline is 3. The description adds no additional parameter semantics beyond what the schema provides (e.g., it doesn't explain how 'query' interacts with timeframes or what 'country' and 'location' defaults imply).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to count new jobs posted within specific timeframes (today/3 days/week) for a query, providing a 'quick market pulse.' It identifies the action ('posted'), the resource ('jobs'), and concrete timeframes, though it doesn't explicitly differentiate from siblings like job_search or job_trends.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'quick market pulse,' which implies a high-level overview, but doesn't specify scenarios, exclusions, or compare it to siblings such as job_search (detailed listings) or job_trends (historical analysis).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

job_compare: C

Compare job markets across cities for a role: job count and salaries.

Parameters (JSON Schema)

Name | Required | Description | Default
query | Yes | Job title |
country | No | | de
locations | Yes | List of cities to compare |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the outputs (job count and salaries) but lacks details on data sources, freshness, rate limits, authentication needs, or error handling. For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('compare job markets', 'across cities', 'for a role', 'job count and salaries') contributes directly to understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and moderate schema coverage, the description is incomplete. It doesn't address behavioral aspects like data reliability or usage constraints, nor does it explain output format or error cases, leaving gaps for a tool with 3 parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67% (2 out of 3 parameters described). The description adds minimal value beyond the schema, as it doesn't explain parameter interactions, format expectations for 'locations', or the role of 'country'. Baseline 3 is appropriate since the schema does moderate lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: comparing job markets across cities for a specific role, with metrics on job count and salaries. It uses specific verbs ('compare') and resources ('job markets', 'cities', 'role'), though it doesn't explicitly distinguish from siblings like 'salary_insights' or 'job_trends'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'salary_insights' or 'job_search'. It states what the tool does but offers no context on appropriate scenarios, prerequisites, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remote_jobs: C

Find remote-only positions for a role or skill.

Parameters (JSON Schema)

Name | Required | Description | Default
query | Yes | Keywords |
country | No | | de
results_per_page | No | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool finds remote-only positions but doesn't mention any behavioral traits such as data sources, rate limits, authentication requirements, or response format. This leaves significant gaps for a search tool with potential external dependencies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy to understand at a glance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a job search tool with no annotations, low schema coverage, and no output schema, the description is incomplete. It lacks information on behavioral aspects, parameter details beyond the basic purpose, and output expectations, making it inadequate for effective tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low at 33%, with only the 'query' parameter having a description ('Keywords'). The description adds no parameter semantics beyond what's implied by the tool's purpose (e.g., it doesn't explain 'country' or 'results_per_page' defaults or usage). This fails to compensate for the schema's lack of detail on two out of three parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Find') and resource ('remote-only positions'), and specifies the scope ('for a role or skill'). However, it doesn't explicitly differentiate from sibling tools like 'job_search' or 'company_jobs', which might also search for jobs but with different filters or scopes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'job_search' and 'company_jobs' available, there's no indication of when this tool is preferred (e.g., for remote-only filtering) or when other tools might be more appropriate, leaving usage context unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

salary_insights: C

Salary ranges for a role in a location.

Parameters (JSON Schema)

Name | Required | Description | Default
role | Yes | Job title, e.g. 'Sales Manager' |
country | No | | de
location | No | City or region |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what data is retrieved (salary ranges) but doesn't cover critical aspects like data sources, accuracy, update frequency, rate limits, authentication needs, or error handling. For a data retrieval tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of a single, clear sentence that directly states the tool's purpose. There is no wasted language or unnecessary elaboration, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of salary data retrieval (which involves nuanced parameters and potential data limitations), the description is insufficient. With no annotations, no output schema, and only moderate parameter coverage, it fails to address key contextual elements like data format, currency, time periods, or reliability. The description alone doesn't provide enough information for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67% (2 out of 3 parameters have descriptions), which is moderate. The description adds minimal value beyond the schema by implying that 'role' and 'location' are used together to scope the salary data, but it doesn't explain parameter interactions (e.g., how 'country' and 'location' relate) or provide examples. With partial schema coverage, the description doesn't fully compensate, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving salary ranges for specific roles in specific locations. It names the resource ('salary ranges') and its scope ('role', 'location'), making the function unambiguous even though it lacks an explicit verb. However, it doesn't explicitly differentiate from sibling tools like 'job_compare' or 'job_trends', which might also involve salary data, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'job_compare' and 'job_trends' that might overlap in functionality, there's no indication of when this tool is preferred or what distinguishes it from them. The description only states what it does, not when to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
