
Server Details

MCP server for 8,000+ AI/ML jobs with 8 tools: search, match, companies, stats, salaries, tags.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 8 of 8 tools scored. Lowest: 3.2/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: get_company, get_job, get_salary_data, get_stats, list_companies, list_tags, match_jobs, and search_jobs all target different aspects of the AI job domain. Descriptions reinforce unique functions, making tool selection straightforward for an agent.

Naming Consistency: 5/5

Tool names follow a consistent verb_noun pattern throughout, such as get_company, list_companies, search_jobs, and match_jobs. All names use snake_case and clear verbs like 'get', 'list', 'search', and 'match', providing a predictable and readable structure.

Tool Count: 5/5

With 8 tools, the server is well-scoped for its AI job domain, covering discovery, search, matching, and analysis without bloat. Each tool earns its place, offering a balanced set that supports comprehensive workflows without being overwhelming.

Completeness: 4/5

The tool surface provides strong coverage for job discovery, search, matching, and data analysis, including CRUD-like operations (e.g., get, list, search). A minor gap exists in update/delete operations, but these are not expected in a read-only job listing context, and agents can work around this with the available tools.

Available Tools

8 tools
get_company: Get Company Profile (Grade: A)

Get detailed profile for an AI company: total open roles, salary range, top tags/skills, workplace distribution, and apply URL. Use after list_companies or search_jobs to learn more about a specific employer.

Parameters:
- slug (required): Company slug (e.g. 'anthropic', 'openai', 'deepmind'). Use the slug from search_jobs or list_companies results.
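Since the server is exposed over Streamable HTTP, every tool on this page is invoked via the MCP `tools/call` method. A minimal sketch of the JSON-RPC envelope for a `get_company` call, with the tool name and `slug` argument taken from the table above; the endpoint URL is not shown on this page, so only the payload is built:

```python
import json

# Build the JSON-RPC 2.0 envelope an MCP client sends for a tool call.
# The "tools/call" method and params shape come from the MCP spec; the
# tool name and the 'slug' argument come from this server's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_company",
        "arguments": {"slug": "anthropic"},
    },
}

payload = json.dumps(request)
print(payload)
```

In practice a client library (or the Glama gateway) constructs this envelope for you; the part you control is the `name`/`arguments` pair.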
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies a read-only operation ('Get') but doesn't disclose behavioral traits like authentication needs, rate limits, error handling, or response format. The description adds some context about the data returned but lacks operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and details, and the second provides usage guidance. Every sentence adds value without redundancy, making it appropriately sized and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is reasonably complete. It covers purpose, usage context, and data returned, though it could benefit from more behavioral transparency (e.g., response format or error cases) to fully compensate for the lack of annotations and output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single required parameter 'slug' with its type and usage example. The description doesn't add any parameter-specific information beyond what the schema provides, maintaining the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed profile') and resource ('an AI company'), listing concrete data points like total open roles, salary range, top tags/skills, workplace distribution, and apply URL. It distinguishes from siblings by focusing on company profiles rather than jobs, salaries, or listings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Use after list_companies or search_jobs to learn more about a specific employer'), providing clear context and naming specific alternative tools for different purposes, which helps the agent select the right tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_job: Get Job Details (Grade: A)

Fetch full job posting by ID or slug: title, description, requirements, salary, workplace, tags, apply URL. Use after search_jobs to read a specific listing.

Parameters:
- id (required): Job ID (UUID) or slug
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what the tool does (fetch details) and the input (ID or slug), but lacks behavioral details like error handling, permissions needed, or rate limits. It adds some context (e.g., fields returned) but doesn't fully compensate for the missing annotations, resulting in an average score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by a usage guideline, all in two efficient sentences. Every sentence adds value without redundancy, making it appropriately sized and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete: it states the purpose, usage, and fields returned. However, it lacks details on output format or error cases, which could be helpful. Since no output schema exists, some explanation of return values might be expected, but the description is sufficient for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'id' parameter documented as 'Job ID (UUID) or slug'. The description adds marginal value by reiterating 'by ID or slug' and implying it's used after search_jobs, but doesn't provide additional syntax or format details beyond the schema. Baseline 3 is appropriate when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('fetch full job posting') and resource ('by ID or slug'), listing specific fields returned (title, description, etc.). It distinguishes from sibling tools by mentioning 'use after search_jobs' to get a specific listing, differentiating from the broader search function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('use after search_jobs to read a specific listing'), providing clear context for its application. It implies an alternative (search_jobs) for initial discovery, though it doesn't explicitly state when not to use it, but the guidance is sufficient for a high score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_salary_data: Get Salary Benchmarks (Grade: A)

Salary benchmarks for AI/ML roles. Filter by tag (e.g. 'llm', 'pytorch'), experience level, workplace type, or company. Returns average, median, p25, p75, min, max, and sample count. Useful for compensation research and negotiation.

Parameters:
- tag (optional): Filter by tag (e.g. 'llm', 'pytorch', 'agents')
- level (optional): Experience level
- company (optional): Company slug (e.g. 'anthropic', 'openai')
- workplace (optional): Workplace type
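The six summary fields the description lists (average, median, p25, p75, min, max) can be reproduced locally to sanity-check your reading of a response. A sketch using Python's statistics module over an invented salary sample; the numbers below are illustrative, not server data:

```python
import statistics

# Hypothetical salary sample (USD) standing in for one tag's data.
salaries = [120_000, 150_000, 165_000, 180_000, 210_000, 240_000, 300_000]

# quantiles(n=4) yields the three quartile cut points: p25, median, p75.
p25, median, p75 = statistics.quantiles(salaries, n=4)

summary = {
    "average": statistics.mean(salaries),
    "median": median,
    "p25": p25,
    "p75": p75,
    "min": min(salaries),
    "max": max(salaries),
    "sample_count": len(salaries),
}
print(summary)
```

The sample_count field matters when interpreting the rest: percentiles over a handful of postings are far noisier than over hundreds.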
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes what the tool returns (statistical metrics) and its use cases, but doesn't mention important behavioral aspects like whether this is a read-only operation, potential rate limits, data freshness, or authentication requirements. The description adds value but leaves significant behavioral questions unanswered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise - three sentences that each earn their place. The first establishes purpose, the second explains filtering and returns, the third provides use cases. No wasted words, front-loaded with core functionality, and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read operation with no output schema and no annotations, the description provides adequate but incomplete context. It explains what data is returned but not the format or structure. Given the statistical nature of the returns (average, median, percentiles), more detail about the output format would be helpful. The description covers basics but leaves gaps for a tool that returns complex statistical data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents all four parameters thoroughly. The description mentions filtering by the same parameters but doesn't add meaningful semantic context beyond what's in the schema descriptions. It provides a high-level overview but no additional syntax, format, or constraint details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get', 'Filter', 'Returns') and resources ('Salary benchmarks for AI/ML roles'). It distinguishes itself from sibling tools like 'get_job' or 'search_jobs' by focusing specifically on compensation data rather than job listings or company information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context ('Useful for compensation research and negotiation') but doesn't explicitly state when to use this tool versus alternatives like 'get_stats' or 'list_companies'. No guidance is given about when NOT to use this tool or what specific scenarios warrant its selection over sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stats: Get Index Stats (Grade: B)

Current AI Dev Jobs index statistics: total active jobs, companies hiring, median salary, new jobs this week.

Parameters: none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the data returned but doesn't mention behavioral traits such as whether this is a read-only operation, potential rate limits, authentication needs, or how frequently the statistics are updated. For a tool with zero annotation coverage, this leaves significant gaps in understanding its operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and lists key metrics without unnecessary words. Every element earns its place by directly informing the tool's function, making it highly concise and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It explains what data is returned but doesn't cover behavioral aspects like safety or performance. For a read-like tool with no structured metadata, more context on usage constraints would improve completeness, though it meets the minimum viable threshold.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately adds context by specifying the types of statistics retrieved (e.g., total active jobs, median salary), which provides semantic value beyond the empty schema. This compensates well for the lack of parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: retrieving statistics about the AI Dev Jobs index, including specific metrics like total active jobs, companies hiring, median salary, and new jobs this week. It uses a specific verb ('get') and resource ('index statistics'), though it doesn't explicitly distinguish itself from sibling tools like 'get_job' or 'search_jobs' beyond the statistical focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'get_job', 'list_companies', or 'search_jobs'. It implies usage for statistical overviews but lacks explicit when-to-use or when-not-to-use instructions, leaving the agent to infer context based on sibling tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_companies: List Top Hiring Companies (Grade: B)

Returns top AI/ML companies by active role count, optionally with average salary. Useful for discovering who is hiring in AI.

Parameters:
- limit (optional): Max companies (default 20, max 50)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool returns data 'by active role count' and 'optionally with average salary,' which gives some context about output behavior. However, it doesn't disclose critical traits like whether this is a read-only operation (implied but not stated), potential rate limits, data freshness, or authentication needs. For a tool with no annotations, this leaves significant gaps in understanding its operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured in two sentences. The first sentence clearly states the core functionality, and the second sentence adds practical context without redundancy. Every word earns its place, making it easy to parse and front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description is minimally adequate. It covers the purpose and basic usage but lacks details on behavioral traits, output format, or differentiation from siblings. Without annotations or an output schema, the description should do more to explain what the return data looks like (e.g., list structure, fields included) and any limitations, but it provides a functional overview that meets basic needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 100% description coverage, providing details on 'limit' (type, range, default). The description doesn't add any parameter-specific information beyond what's in the schema, such as explaining how 'limit' affects ranking or output. Since schema coverage is high, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't need to heavily supplement the well-documented schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Returns top AI/ML companies by active role count, optionally with average salary.' It specifies the verb ('returns'), resource ('top AI/ML companies'), and key criteria ('by active role count'). However, it doesn't explicitly differentiate from sibling tools like 'get_stats' or 'search_jobs', which might also provide company-related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance: 'Useful for discovering who is hiring in AI.' This suggests the tool is for exploration and hiring insights. However, it lacks explicit guidance on when to use this tool versus alternatives like 'get_stats' (which might provide broader statistics) or 'search_jobs' (which might focus on specific job listings). No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tags: List Available Tags (Grade: A)

Returns all available job tags/skills with the count of active jobs for each. Use to discover what tags are available for search_jobs filtering.

Parameters: none
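The flow the description recommends is list_tags first, search_jobs second. A sketch of that two-step workflow; `call_tool` and its canned responses are stand-ins invented for illustration, not a real MCP client:

```python
# Canned list_tags response shaped like the description implies
# (tag plus active-job count); the values are invented.
CANNED = {
    "list_tags": [
        {"tag": "llm", "active_jobs": 412},
        {"tag": "pytorch", "active_jobs": 198},
        {"tag": "agents", "active_jobs": 87},
    ],
}

def call_tool(name, arguments=None):
    # Stand-in for a real MCP client call.
    if name == "list_tags":
        return CANNED["list_tags"]
    if name == "search_jobs":
        # A real server would return ranked job listings here.
        return {"requested_filters": arguments}
    raise ValueError(f"unknown tool: {name}")

# Step 1: discover which tags exist and have enough active jobs.
tags = [t["tag"] for t in call_tool("list_tags") if t["active_jobs"] > 100]

# Step 2: feed the surviving tags into search_jobs.
result = call_tool("search_jobs", {"tags": tags, "limit": 10})
print(result)
```

Running discovery first avoids guessing tag spellings ('llm' vs 'large-language-models') and wasting a search call on a tag that does not exist.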

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It describes the return data (tags with counts) but lacks details on behavioral traits like pagination, rate limits, or authentication needs. The description is accurate but minimal for behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states purpose and output, the second provides usage guidance. It is front-loaded and appropriately sized for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is nearly complete. It covers purpose, usage, and output semantics, though it could add more behavioral details like response format or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema coverage, the baseline is 4. The description adds no parameter info, but this is acceptable since there are no parameters to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Returns') and resource ('all available job tags/skills'), plus the specific data returned ('with the count of active jobs for each'). It distinguishes from siblings like 'search_jobs' by focusing on tag discovery rather than job filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool ('Use to discover what tags are available for search_jobs filtering'), providing clear context and an alternative ('search_jobs') for actual filtering operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

match_jobs: Match Jobs to Candidate Profile (Grade: A)

Rank active AI/ML jobs against a candidate profile (skills, salary range, workplace, level). Scoring combines tag overlap (+2 per match), salary overlap (+3), workplace/level/type/location matches, and description keyword hits. Use this when an agent is choosing which role to surface to its user — it returns pre-ranked matches with scoring explanations.

Parameters:
- level (optional)
- limit (optional): Max results (default 10)
- skills (required): Candidate skills/tags (e.g. ['python','llm','pytorch'])
- job_type (optional)
- location (optional): Location substring (e.g. 'San Francisco')
- workplace (optional)
- salary_max (optional): Maximum salary USD
- salary_min (optional): Minimum salary USD
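The two bonuses the description quantifies (+2 per overlapping tag, +3 for salary-range overlap) can be written down directly. A sketch assuming a standard interval-overlap test; the server's weights for workplace/level/type/location matches and keyword hits are not documented, so they are omitted:

```python
# Reproduces only the two documented bonuses from the match_jobs
# description: +2 per overlapping tag, +3 for salary-range overlap.
def match_score(job, skills, salary_min=None, salary_max=None):
    score = 2 * len(set(job["tags"]) & set(skills))
    # Two ranges overlap when each one's low end is at or below the
    # other's high end.
    if salary_min is not None and salary_max is not None:
        if job["salary_min"] <= salary_max and salary_min <= job["salary_max"]:
            score += 3
    return score

job = {"tags": ["python", "llm", "rag"],
       "salary_min": 150_000, "salary_max": 210_000}
print(match_score(job, skills=["python", "llm", "pytorch"],
                  salary_min=180_000, salary_max=250_000))  # prints 7
```

Here two tags overlap (python, llm) for +4, and the 150k-210k posting overlaps the 180k-250k ask for +3, giving 7. The real server also returns a scoring explanation per match, so you can verify its arithmetic rather than trusting this sketch.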
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it ranks jobs, uses a specific scoring algorithm (tag overlap +2, salary overlap +3, workplace/level/type/location matches, description keyword hits), and returns pre-ranked matches with scoring explanations. However, it doesn't mention rate limits, authentication needs, or whether this is a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that are front-loaded with purpose and usage context. Every element earns its place: first sentence explains what the tool does and scoring details, second sentence provides usage guidance and output format. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex ranking tool with 8 parameters, no annotations, and no output schema, the description does well by explaining the scoring algorithm and output format. However, it could better address parameter interactions (e.g., how multiple criteria combine) and doesn't mention error conditions or what happens when no matches are found.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 63% (5 of 8 parameters have descriptions), so the description needs to compensate. It adds meaningful context by explaining how parameters are used in the scoring algorithm (skills for tag overlap, salary range for salary overlap, workplace/level for matching). However, it doesn't clarify the semantics for 'job_type' or 'location' parameters beyond mentioning they're part of matching.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Rank') and resource ('active AI/ML jobs against a candidate profile'), listing the specific criteria used (skills, salary range, workplace, level). It distinguishes itself from sibling tools like 'get_job' or 'search_jobs' by emphasizing ranking/scoring rather than basic retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool: 'when an agent is choosing which role to surface to its user.' This provides clear context for application. It differentiates from siblings by focusing on ranking/scoring rather than simple retrieval or listing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_jobs: Search AI/ML Jobs (Grade: B)

Search curated AI/ML engineering roles. Filter by tags (e.g. llm, pytorch), workplace (remote/hybrid/onsite), experience level, salary range, or keyword. Results are ranked by quality score and recency.

Parameters:
- tags (optional): Tag filter (e.g. ['llm', 'pytorch']). Returns jobs with ANY of these tags.
- level (optional): Experience level
- limit (optional): Max results (default 10, max 25)
- query (optional): Keyword query; matches title, description, company name, and tags
- company (optional): Company slug (e.g. 'openai', 'anthropic'); returns only that company's jobs
- job_type (optional): Employment type
- location (optional): Location substring (e.g. 'San Francisco')
- workplace (optional): Filter by workplace type
- salary_min (optional): Minimum salary in USD. Matches if job's max salary >= this value (range overlap).
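Two filter semantics in this schema are easy to misread: tags is ANY-match (a job passes if it carries at least one requested tag), and salary_min is a range-overlap test against the job's maximum salary, not its minimum. A sketch of both predicates as the schema describes them; the field names on the job records are assumptions for illustration:

```python
# Two search_jobs filter semantics, per the schema descriptions:
# tags = ANY-match, salary_min = job's max salary must clear the floor.
def matches(job, tags=None, salary_min=None):
    if tags and not set(tags) & set(job["tags"]):
        return False
    if salary_min is not None and job["salary_max"] < salary_min:
        return False
    return True

jobs = [
    {"title": "LLM Engineer", "tags": ["llm", "python"], "salary_max": 220_000},
    {"title": "Robotics Engineer", "tags": ["ros", "cpp"], "salary_max": 180_000},
]
hits = [j["title"] for j in jobs
        if matches(j, tags=["llm", "pytorch"], salary_min=200_000)]
print(hits)  # only the LLM role carries a requested tag and clears 200k
```

The overlap semantics for salary_min means a posting listed as "150k-220k" still matches a 200k floor, which is usually what a candidate wants.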
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about result ranking ('ranked by quality score and recency') and clarifies that tags use 'ANY' logic, which isn't in the schema. However, it lacks details on permissions, rate limits, pagination, or error handling for a search tool with 8 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences, front-loading the core purpose and then detailing filtering options and result ranking. Every sentence contributes essential information with zero waste, making it highly concise and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no annotations, no output schema), the description is adequate but incomplete. It covers the search scope and ranking but lacks details on output format, error cases, or integration with sibling tools, leaving gaps for an agent to fully understand the tool's behavior in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds minimal value by listing filter types (tags, workplace, etc.) but doesn't provide additional syntax, format, or semantic details beyond what's in the schema descriptions, aligning with the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for AI/ML engineering roles with specific filtering capabilities, using the verb 'search' and resource 'jobs'. However, it doesn't explicitly differentiate from sibling tools like 'get_job' or 'list_companies', which likely serve different purposes (retrieving single jobs vs listing companies).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_job' or 'list_companies'. It mentions filtering capabilities but doesn't specify prerequisites, limitations, or comparative contexts with sibling tools, leaving the agent without explicit usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
