100Hires
Server Details
Official MCP server for 100Hires ATS: candidates, jobs, applications, interviews, messages.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: 100Hires/mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 130 of 130 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but the sheer number (130) introduces some ambiguity, e.g. between move, advance, and transfer operations, and between batch and single-item variants. Detailed descriptions help, but the volume still forces agents to parse names carefully.
All tools follow the 'hires_verb_noun' pattern with underscore separation. Most verbs are consistent (create, get, list, update, delete), but there are minor inconsistencies like 'add' vs 'create' in batch tools and 'cancel' vs 'delete' for messages.
At 130 tools, the server far exceeds the typical desirable range (3-15). Even for a comprehensive ATS this is excessive and likely to overwhelm agents and increase selection errors. Many tools are very granular, adding unnecessary complexity.
The tool surface is remarkably comprehensive, covering CRUD for all major entities (candidates, jobs, applications, interviews, messages, etc.), batch operations, taxonomies, webhooks, and auxiliary features like billing and feedback. No obvious gaps for a full-featured ATS.
Available Tools
130 tools
hires_add_candidate_tags
Add one or more tags to a candidate. Used for campaign tagging, qualification labels, and source attribution.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
| tags | Yes | Array of tag strings to add. | |
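As a minimal sketch, the arguments object for this tool can be built as below; the helper name is hypothetical, and the only assumption taken from the table is that `id` accepts either an integer ID or a string alias.

```python
from typing import List, Union

def add_candidate_tags_args(candidate: Union[int, str], tags: List[str]) -> dict:
    """Build the arguments object for a hires_add_candidate_tags call."""
    if not tags:
        raise ValueError("tags must contain at least one tag string")
    return {"id": candidate, "tags": list(tags)}

# The id parameter accepts either an integer ID or a string alias:
print(add_candidate_tags_args(42, ["sourced-linkedin", "senior"]))
print(add_candidate_tags_args("jane-doe", ["campaign-q3"]))
```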
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden; it discloses mutation but nothing about idempotency, errors, or permissions, which is minimal for a simple tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with clear purpose and list of use cases, no redundant information, front-loaded with action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, no annotations; description covers basic function but omits return values, validation details, and error conditions, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds no additional meaning for the parameters beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it adds one or more tags to a candidate, with specific use cases (campaign tagging, qualification labels, source attribution), distinguishing it from sibling tools like hires_batch_add_tags and hires_remove_candidate_tag.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage via use cases but lacks explicit when-to-use or when-not-to-use guidance relative to alternatives like hires_batch_add_tags.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_add_hiring_team_member
Add a company member to the job's hiring team. Use in workflow setup and ownership automation.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias. | |
| user_id | Yes | User ID to add to the hiring team. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It only says 'Add' which implies mutation, but does not disclose prerequisites, failure conditions, or what happens after addition. Minimal behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences with no wasted words. The action is front-loaded and the usage context is provided in the second sentence. Excellent conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the core action and usage hint, but lacks details like return values, error handling, or prerequisites. For a simple add operation it is acceptable but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. The description adds no extra meaning beyond the schema. Baseline score is appropriate since the schema already documents the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action: 'Add a company member to the job's hiring team.' It uses a specific verb ('Add') and identifies the resource ('hiring team member'), and distinguishes from sibling tools like 'hires_list_hiring_team'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage context: 'Use in workflow setup and ownership automation.' This tells when to use the tool, but does not explicitly mention when not to use or compare with alternatives. The guidance is clear, making it above average.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_advance_application
Advance an application to the next pipeline stage according to workflow order. No stage_id needed -- the system determines the next stage automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. | |
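A sketch of building this tool's arguments, assuming (from the table) that `include` is a comma-separated string limited to the two listed relations; the helper and the allow-list check are illustrative, not part of the server's API.

```python
from typing import Iterable, Optional

# Relations named in the schema description; treated here as an allow-list.
ALLOWED_RELATIONS = {"candidate", "cv.text"}

def advance_application_args(app_id: int, include: Optional[Iterable[str]] = None) -> dict:
    """Build arguments for hires_advance_application, joining relations into
    the comma-separated string the schema expects."""
    args: dict = {"id": app_id}
    if include:
        relations = list(include)
        unknown = set(relations) - ALLOWED_RELATIONS
        if unknown:
            raise ValueError(f"unsupported relations: {sorted(unknown)}")
        args["include"] = ",".join(relations)
    return args

print(advance_application_args(1001, ["candidate", "cv.text"]))
# {'id': 1001, 'include': 'candidate,cv.text'}
```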
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It reveals that the tool automatically advances to the next stage, but does not disclose potential side effects, permissions required, or behavior at final stages.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words, front-loading the core action. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple advancement action without output schema, the description is mostly complete. However, it could benefit from mentioning the return value or what happens at the final stage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents the parameters. The description adds minimal extra meaning beyond confirming that 'id' refers to an application and noting that no stage_id is needed (stage_id is not a parameter of this tool).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'advance' and the resource 'application', and specifies the key distinguishing feature: the system automatically determines the next stage. This differentiates it from siblings like 'hires_move_application' which likely require a stage ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly notes that no stage_id is needed, implying when to use this tool over alternatives. However, it does not mention when NOT to use it or provide explicit alternatives, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_batch_add_tags
Add tags to multiple candidates in one request (max 100). Returns per-item results with partial success support.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Candidate IDs to tag (max 100). | |
| tags | Yes | Tag names to attach. | |
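Since the tool caps each request at 100 candidate IDs, a caller with a larger list needs client-side chunking. A minimal sketch (helper name and chunking policy are assumptions, not server behavior):

```python
from typing import List

MAX_BATCH = 100  # documented per-request limit

def batch_add_tags_requests(ids: List[int], tags: List[str]) -> List[dict]:
    """Split a large ID list into one arguments object per batch of <= 100."""
    return [
        {"ids": ids[i:i + MAX_BATCH], "tags": list(tags)}
        for i in range(0, len(ids), MAX_BATCH)
    ]

# 250 candidates -> three requests of 100, 100, and 50 IDs.
reqs = batch_add_tags_requests(list(range(1, 251)), ["bulk-import"])
print(len(reqs), [len(r["ids"]) for r in reqs])  # 3 [100, 100, 50]
```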
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses partial success support and per-item results, but lacks details on authentication, side effects, or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no redundancy. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given two simple parameters and no output schema, description fully covers tool behavior, batch limit, and return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes both parameters well (100% coverage). Description adds value by explaining return behavior (per-item results, partial success) beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states action (add tags), resource (multiple candidates), and constraint (max 100). Distinguishes from sibling tools like hires_add_candidate_tags (single candidate) and hires_batch_remove_tags (remove).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Indicates batch use case and limit of 100. Implies alternative for single candidate, but does not explicitly mention when not to use or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_batch_create_messages
Create up to 100 scheduled messages in one request. Each item specifies its own candidate_id and message payload. Items are processed independently -- one failure does not stop others. Per-candidate RBAC is enforced for each item.
| Name | Required | Description | Default |
|---|---|---|---|
| messages | Yes | Array of message payloads to create (max 100). | |
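A sketch of assembling the `messages` array. The description only guarantees that each item carries its own candidate_id; the payload field names below (`body`, `send_at`) are assumptions for illustration, not from the published schema.

```python
from typing import List

def batch_messages_args(items: List[dict]) -> dict:
    """Build arguments for hires_batch_create_messages, enforcing the
    documented constraints: max 100 items, each with its own candidate_id."""
    if len(items) > 100:
        raise ValueError("at most 100 messages per request")
    for item in items:
        if "candidate_id" not in item:
            raise ValueError("each item must specify its own candidate_id")
    return {"messages": items}

# Field names besides candidate_id are illustrative assumptions:
args = batch_messages_args([
    {"candidate_id": 7, "body": "Interview reminder", "send_at": "2025-06-01T09:00:00Z"},
    {"candidate_id": 9, "body": "Offer follow-up", "send_at": "2025-06-02T09:00:00Z"},
])
print(len(args["messages"]))  # 2
```

Since items are processed independently, a failed item does not invalidate the rest of the array; callers should still inspect per-item outcomes in the response.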
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description adds value by disclosing that items are processed independently (one failure does not stop others) and that per-candidate RBAC is enforced. This is beyond the schema information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with key facts (capacity, structure, behavior). No unnecessary words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (batch operation with many fields) and no output schema, the description covers processing model and RBAC but omits response details, error handling, or what happens on success/failure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds no new semantic information beyond paraphrasing the structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates up to 100 scheduled messages in one request, each with its own candidate_id and payload. This is specific and distinguishes it from single-message tools like hires_send_candidate_message.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It mentions batch creation, max 100 items, independent processing, and per-candidate RBAC. However, it does not explicitly state when to use this tool versus alternatives or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_batch_job_boards
Get board publication states for multiple jobs in one request. Optimized for batch monitoring and management UIs.
| Name | Required | Description | Default |
|---|---|---|---|
| jobs | Yes | Array of job IDs. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It indicates a read operation ('Get') but does not disclose potential side effects, authorization requirements, rate limits, or behavior for invalid job IDs. The description is minimal for a batch read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no fluff: the first states the core function, the second adds context about optimization. Information is front-loaded and every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description should explain what 'board publication states' entails (e.g., fields returned). It does not discuss pagination, error conditions, or performance considerations. However, given the simple parameter set, it is minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear 'Array of job IDs' description. The tool description adds no additional meaning beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get board publication states for multiple jobs in one request,' specifying the verb (Get) and resource (board publication states), and distinguishes it from sibling mutation tools like hires_publish_to_job_board and hires_batch_publish_to_boards.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for batch monitoring (optimized for UIs) but does not explicitly state when to use vs alternatives, nor does it mention when not to use this tool. It lacks explicit references to sibling tools like hires_list_job_boards or single-job endpoint alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_batch_move_applications
Move multiple applications to a pipeline stage in one request. Returns per-item results with partial success support. Max 100 application IDs per request.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Application IDs to move (max 100). | |
| stage_id | Yes | Target pipeline stage ID. | |
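Because the server caps each request at 100 application IDs, validating client-side before the call avoids a wasted round trip. A minimal sketch (the helper is hypothetical):

```python
from typing import Iterable

def batch_move_args(ids: Iterable[int], stage_id: int) -> dict:
    """Build and validate arguments for hires_batch_move_applications."""
    id_list = list(ids)
    if not id_list:
        raise ValueError("ids must not be empty")
    if len(id_list) > 100:
        raise ValueError(f"max 100 application IDs per request, got {len(id_list)}")
    return {"ids": id_list, "stage_id": stage_id}

print(batch_move_args([501, 502, 503], stage_id=12))
# {'ids': [501, 502, 503], 'stage_id': 12}
```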
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses partial success support and per-item results, which adds transparency beyond the schema. However, it does not detail error handling or permissions, which are typical behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no unnecessary words. Key information (batch move, partial success, max limit) is front-loaded and each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and only two parameters, the description covers the batch nature, partial success, and max limit. It is fairly complete but could mention transactional guarantees or error behavior for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter schema coverage is 100%, so baseline is 3. The description adds the constraint 'Max 100 application IDs per request' for the ids parameter, which is not in the schema definition, providing additional semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Move' and the resource 'multiple applications to a pipeline stage' in one request. It distinguishes from sibling tools like 'hires_move_application' by indicating batch operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies batch usage vs. single moves and specifies a max limit of 100 application IDs per request. It does not explicitly state when not to use or compare to alternatives, but the context is sufficient for selecting the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_batch_publish_to_boards
Activate board publication for multiple jobs in one request. Use for bulk job distribution workflows.
| Name | Required | Description | Default |
|---|---|---|---|
| jobs | Yes | Array of job IDs to publish | |
| boards | No | Array of board identifiers to activate (e.g. ['indeed', 'linkedin']) | |
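A sketch of the two argument shapes. The behavior when `boards` is omitted is an assumption (presumably the jobs' default or configured boards); the schema only marks the parameter optional.

```python
from typing import Iterable, Optional

def batch_publish_args(jobs: Iterable[int], boards: Optional[Iterable[str]] = None) -> dict:
    """Build arguments for hires_batch_publish_to_boards. Omitting `boards`
    leaves board selection to the server (assumed default behavior)."""
    args: dict = {"jobs": list(jobs)}
    if boards is not None:
        args["boards"] = list(boards)
    return args

print(batch_publish_args([11, 12]))                          # server-chosen boards
print(batch_publish_args([11, 12], ["indeed", "linkedin"]))  # named boards only
```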
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It lacks details on idempotency, failure handling, permissions, or side effects. For a batch mutation, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with core purpose, no redundant information. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so the description should explain return values or error handling. It does not mention what the response contains, especially for batch operations where partial failures are relevant.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers 100% of parameters with descriptions. The description adds no additional meaning beyond what is in the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it activates board publication for multiple jobs in one request, and explicitly mentions 'bulk job distribution workflows,' distinguishing it from single-job sibling tools like hires_publish_to_job_board.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for bulk workflows, which helps differentiate from single operations. However, it does not explicitly state when not to use this tool or list alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_batch_reject_applications
Reject multiple applications in one request with an optional rejection reason. Returns per-item results with partial success support. Max 100 application IDs per request.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Application IDs to reject (max 100). | |
| rejection_reason_id | No | Optional rejection reason ID. | |
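Since the tool returns per-item results with partial success but publishes no output schema, callers must handle mixed outcomes. The result shape below (`id`, `ok`, `error`) is an assumption for illustration only:

```python
from typing import Dict, List, Tuple

def split_results(results: List[dict]) -> Tuple[List[int], Dict[int, str]]:
    """Partition assumed per-item results into succeeded IDs and
    failed IDs mapped to their error messages."""
    succeeded = [r["id"] for r in results if r.get("ok")]
    failed = {r["id"]: r.get("error", "") for r in results if not r.get("ok")}
    return succeeded, failed

ok, bad = split_results([
    {"id": 1, "ok": True},
    {"id": 2, "ok": False, "error": "already rejected"},
])
print(ok, bad)  # [1] {2: 'already rejected'}
```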
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description discloses partial success per-item results and request limit. Adequately informs agent of behavior beyond basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with all necessary info. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description covers return format (per-item results, partial success). Limit is mentioned. Sufficient for a batch rejection tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
100% schema coverage; description restates 'max 100' and 'optional rejection reason' but adds little beyond schema. Baseline score justified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Reject multiple applications' with specifics like optional reason and limit. Distinguishes from sibling hires_reject_application (single) and other batch operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage: for batch rejection. Mentions partial success and max 100, guiding when to use. Lacks explicit exclusion of single reject but still clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_batch_remove_from_boards
Deactivate board publication for multiple jobs in one request. Use for bulk depublishing workflows.
| Name | Required | Description | Default |
|---|---|---|---|
| jobs | Yes | Array of job IDs to depublish | |
| boards | No | Array of board identifiers to deactivate (e.g. ['indeed', 'linkedin']) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description lacks behavioral details beyond the core action. It does not disclose side effects, auth requirements, reversibility, or error conditions. With no annotations provided, the description carries the full burden but fails to address these aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, using two clear sentences with no unnecessary words. It efficiently conveys the purpose and usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 parameters, no output schema, and no annotations, the description is too brief. It does not explain what 'deactivate board publication' entails (e.g., specific boards or all), response expectations, or potential errors, leaving gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with descriptions for both parameters. The description does not add additional meaning beyond the schema, so the baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Deactivate board publication for multiple jobs in one request' which specifies the action, resource, and batch nature. It also explicitly mentions 'Use for bulk depublishing workflows,' distinguishing from sibling tools like single-remove or publish tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates 'Use for bulk depublishing workflows,' providing context for appropriate use. However, it does not explicitly mention when not to use it (e.g., for single job removal) or alternatives like the single-remove tool 'hires_remove_from_job_board'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_batch_remove_tags
Remove tags from multiple candidates in one request (max 100). Returns per-item results with partial success support.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Candidate IDs to remove tags from (max 100). | |
| tags | Yes | Tag names to remove. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must disclose behavior. It accurately indicates a destructive operation ('Remove tags'), specifies a limit (100), and explains the response format with partial success support. Lacks details on authentication or idempotency but is sufficient for a straightforward removal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states the action and limit, second describes the response. Every word adds value with no redundancy or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return format ('Returns per-item results with partial success support'). It does not mention edge cases like idempotency but is sufficient for a batch removal tool with clear schema descriptions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters ('Candidate IDs to remove tags from (max 100).' and 'Tag names to remove.'). The description adds context about per-item results but does not elaborate on parameter usage beyond the schema, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'Remove', the resource 'tags', and the scope 'multiple candidates' with a specific constraint 'max 100'. It distinguishes from siblings like 'hires_remove_candidate_tag' (singular) and 'hires_batch_add_tags' (adds tags instead).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies batch use with 'one request (max 100)' and mentions partial success, but does not explicitly contrast with the singular tool 'hires_remove_candidate_tag' or state when not to use. Still, the context is clear for a batch operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_cancel_all_notification_messages (Grade: A)
Cancel all scheduled notification emails for a candidate. Already sent notifications are not affected. Returns success even if no scheduled notifications exist.
| Name | Required | Description | Default |
|---|---|---|---|
| candidate_id | Yes | Candidate ID (numeric) or alias. | |
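The description's semantics (only scheduled items cancelled, sent ones untouched, success even when nothing is scheduled) can be captured in a toy simulation. The data shape below is hypothetical; it exists only to illustrate the documented behavior.

```python
# Toy model of hires_cancel_all_notification_messages semantics:
# cancel scheduled notifications for one candidate, leave sent ones
# alone, and report success even when nothing was scheduled.
def cancel_all_scheduled(notifications, candidate_id):
    cancelled = 0
    for n in notifications:
        if n["candidate_id"] == candidate_id and n["status"] == "scheduled":
            n["status"] = "cancelled"
            cancelled += 1
    return {"success": True, "cancelled": cancelled}
```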
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the disclosure burden, and it does cover the key behaviors: only scheduled notifications are affected (not sent ones), and the call returns success even if none exist. It lacks details on permissions or side effects, but is adequate for a cancellation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three succinct sentences: action, scope, and idempotency. No redundant information, front-loaded with core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple one-param tool with no output schema, description fully covers what the tool does, its scope, and edge case (no scheduled notifications). No additional context needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear parameter description. The tool description does not add meaning beyond stating the action performed on the candidate, so the baseline of 3 applies: the schema already documents the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it cancels all scheduled notification emails for a candidate, using specific verb and resource. Distinguishes from siblings like delete_notification_message by specifying 'all scheduled' and from get/update by cancel action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for canceling scheduled notifications and states that already-sent messages are unaffected. However, it gives no explicit guidance on when to use this tool versus alternatives like delete_notification_message, and no when-not-to-use cases or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_create_application (Grade: A)
Create an application by linking an existing candidate to a job. Use for sourcing workflows and manual application ingestion. The candidate must already exist.
| Name | Required | Description | Default |
|---|---|---|---|
| cv | No | Optional CV/resume to attach. | |
| job_id | Yes | Job ID to apply the candidate to. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. | |
| stage_id | No | Pipeline stage ID. If omitted, defaults to the first stage. | |
| candidate_id | Yes | Candidate ID (numeric) or alias. | |
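The optional-field behavior in the table (server defaults to the first stage when `stage_id` is omitted; `include` is a comma-separated string) can be sketched as a payload builder. The helper name and None-handling are assumptions for illustration, not an official SDK.

```python
# Assemble a hires_create_application request body. Omitted optional
# fields are left out entirely so the server applies its documented
# defaults (first pipeline stage when stage_id is absent).
def build_create_application_payload(candidate_id, job_id,
                                     stage_id=None, include=None, cv=None):
    payload = {"candidate_id": candidate_id, "job_id": job_id}
    if stage_id is not None:
        payload["stage_id"] = stage_id
    if include:
        # schema expects a comma-separated string, e.g. "candidate,cv.text"
        payload["include"] = ",".join(include)
    if cv is not None:
        payload["cv"] = cv
    return payload
```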
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only mentions that the tool links a candidate to a job and requires the candidate to exist. It does not mention error handling (e.g., if the candidate or job doesn't exist), auth requirements, rate limits, side effects like notifications, or what happens with optional parameters like cv or stage_id.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, all of which are substantive. It introduces the tool, states its use cases, and provides a necessary prerequisite. No fluff or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 5 parameters, one nested object, and no output schema. The description covers the main purpose and prerequisite but lacks details on return values, error conditions, or interaction with other tools. For a tool of this complexity, it is adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description does not add any additional meaning beyond the schema; it merely states the overall purpose. The schema already explains each parameter, so the description provides no extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'create', the resource 'application', and specifies that it links an existing candidate to a job. It also provides context for use cases like sourcing workflows and manual ingestion. This clearly distinguishes it from sibling tools like hires_create_candidate or hires_submit_career_application.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions when to use this tool: 'for sourcing workflows and manual application ingestion', and includes a prerequisite: 'The candidate must already exist.' However, it does not explicitly state when not to use it or list alternative tools for other scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_create_candidate (Grade: A)
Create a new candidate profile. Optionally link to a job/stage and attach a CV. Used for imports, inbound forms, and enrichment workflows.
| Name | Required | Description | Default |
|---|---|---|---|
| cv | No | CV/resume file to attach (base64 payload). | |
| email | No | Candidate email address. Used for deduplication. | |
| phone | No | Candidate phone number. | |
| job_id | No | Job ID to create an application for this candidate. | |
| profile | No | Key-value map of profile field answers. Keys can be question text or question_id. Example: {"Current job title": "Senior Engineer"}. | |
| stage_id | No | Pipeline stage ID for the initial application. Requires job_id. | |
| last_name | No | Candidate last name. | |
| company_id | No | Target company ID. Required only when the API key has access to multiple companies. | |
| first_name | No | Candidate first name. | |
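The one cross-field constraint documented in the table ("stage_id ... Requires job_id") is easy to enforce client-side before calling the tool. The validator name and the None-stripping behavior are illustrative assumptions.

```python
# Validate the documented cross-field rule for hires_create_candidate:
# stage_id is only meaningful when job_id is also supplied.
def build_create_candidate_payload(**fields):
    clean = {k: v for k, v in fields.items() if v is not None}
    if "stage_id" in clean and "job_id" not in clean:
        raise ValueError("stage_id requires job_id")
    return clean
```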
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the behavioral disclosure burden. It correctly indicates the action is a creation (write operation) with optional parameters. However, it does not elaborate on side effects, deduplication logic (hinted in email param), authentication needs, or what happens on success/failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences with no redundancy. It is front-loaded with the primary action, followed by key options and usage scenarios. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 9 optional parameters and no output schema, the description effectively covers the main functionality and typical use cases. It might omit the return value (likely the created candidate object), but for a creation tool this is acceptable. It provides sufficient context for the agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all parameters. The description adds high-level context by summarizing optional linking and CV attachment, but does not provide deeper meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Create'), the resource ('candidate profile'), and optional linking to job/stage and CV attachment. It also lists specific use cases (imports, inbound forms, enrichment workflows), distinguishing it from siblings like hires_update_candidate or hires_create_application.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage contexts ('Used for imports, inbound forms, and enrichment workflows'), helping the agent decide when to invoke it. However, it does not explicitly state when not to use or contrast with alternatives such as hires_submit_career_application.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_create_company (Grade: A)
Create a client company and link ownership context. Typical entrypoint for multi-tenant onboarding.
| Name | Required | Description | Default |
|---|---|---|---|
| url | No | Company profile URL | |
| logo | No | Company logo file | |
| name | Yes | Company name | |
| website | No | Company website URL | |
| company_owner_name | Yes | Company owner full name | |
| is_staffing_agency | No | Whether this company is a staffing agency | |
| company_owner_email | Yes | Company owner email address | |
| company_owner_phone | No | Company owner phone number | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description only mentions creation and ownership linking. Lacks details on side effects, permissions, rate limits, or failure modes. Insufficient for full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Front-loads purpose. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate but lacks return value description (no output schema) and prerequisites. Sufficient for basic understanding but incomplete for complex scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline 3. Description adds no parameter-specific information beyond schema. No improvement needed but also no extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Create a client company and link ownership context', using verb 'create' and resource 'company'. Distinguishes from siblings like 'hires_update_company' and 'hires_delete_company'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description frames the tool as the 'Typical entrypoint for multi-tenant onboarding', giving context on when to use it. It lacks explicit when-not-to-use guidance or named alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_create_email_template (Grade: A)
Create a new email template with name, subject, and body. Subject and body support placeholders like {{first_name}}, {{job_title}}. To embed placeholders: 1) GET /template-placeholders to list them, 2) POST /template-placeholders/prepare to get the HTML tag, 3) insert the tag into the body.
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | Email body HTML (supports placeholders) | |
| name | Yes | Template name | |
| subject | Yes | Email subject line (supports placeholders like {{first_name}}, {{job_title}}) | |
| company_id | No | Target company ID | |
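A minimal client-side check for the `{{first_name}}`-style placeholders mentioned above can catch typos before template creation. In practice the authoritative list would come from GET /template-placeholders, so the `KNOWN_PLACEHOLDERS` set here is an illustrative stand-in.

```python
# Flag {{...}} tokens in a subject or body that are not recognized
# placeholders, so a bad template fails fast before the API call.
import re

KNOWN_PLACEHOLDERS = {"first_name", "job_title"}  # illustrative subset

def unknown_placeholders(text):
    """Return the set of {{token}} names not in the known list."""
    return set(re.findall(r"\{\{(\w+)\}\}", text)) - KNOWN_PLACEHOLDERS
```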
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It mentions creation and placeholder support but does not disclose side effects, authentication requirements, rate limits, or error conditions. The placeholder embedding steps add some context, but disclosure remains insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two clear sentences followed by a numbered step list. Purpose is front-loaded; no fluff. Every sentence adds value, particularly the placeholder embedding guide.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers creation and placeholder embedding well. However, it lacks return value information (no output schema), and does not address the optional company_id parameter. With moderate complexity, more completeness is expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% so parameters already well-documented. Description adds value by listing placeholder examples and providing embedding steps. However, it does not mention the optional company_id parameter, so marginal improvement over schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Create a new email template with name, subject, and body,' using a specific verb and resource. It distinguishes itself from siblings like update, get, delete, and list by clearly scoping the action to creating new templates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly targets creating new templates and provides procedural steps for embedding placeholders, which is helpful. However, it lacks explicit guidance on when to use this tool versus alternatives like update, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_create_form (Grade: B)
Create a new application form, optionally attaching existing questions by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Form name. | |
| questions | No | Array of question IDs to attach to this form. | |
| company_id | No | Target company ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It states the basic action but omits details on side effects, permissions, or what happens to the form after creation. This is insufficient for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, front-loaded with the primary action. It is concise and to the point, but could include more detail without losing brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description lacks information about return values, prerequisites (e.g., company_id), or confirmation of success. It is minimally functional but incomplete for an agent to use confidently.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds little beyond the schema. The mention of 'optionally attaching existing questions by ID' clarifies the relation to the 'questions' parameter, but overall value is marginal.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Create') and the resource ('new application form'), and mentions the optional attachment of questions. This distinguishes it from sibling tools like hires_update_form and hires_create_question.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies that questions should exist before attaching (by ID), but does not provide explicit guidance on when to use this tool versus alternatives like hires_create_question or hires_update_form.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_create_interview (Grade: A)
Schedule a new interview for an application. Provide start/end times as Unix timestamps and a list of interviewer user IDs. Location is resolved to an existing record or created automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
| include | No | Comma-separated relations to embed: candidate, application, job. | |
| end_time | Yes | Interview end time as Unix timestamp (seconds, must be after start_time). | |
| location | No | Location string; resolved to existing record or created automatically. | |
| start_time | Yes | Interview start time as Unix timestamp (seconds). | |
| interviewer_ids | Yes | List of user IDs who will conduct the interview. | |
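The Unix-second timestamps and the end-after-start rule in the table can be enforced before the request is sent. The datetime conversion and helper name below are illustrative assumptions; only the field names and constraints come from the schema above.

```python
# Build a hires_create_interview body from timezone-aware datetimes,
# enforcing the documented end_time > start_time rule.
from datetime import datetime, timezone

def build_interview_payload(application_id, start, end, interviewer_ids,
                            location=None):
    start_ts, end_ts = int(start.timestamp()), int(end.timestamp())
    if end_ts <= start_ts:
        raise ValueError("end_time must be after start_time")
    if not interviewer_ids:
        raise ValueError("at least one interviewer user ID is required")
    payload = {"id": application_id, "start_time": start_ts,
               "end_time": end_ts, "interviewer_ids": list(interviewer_ids)}
    if location:
        payload["location"] = location
    return payload
```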
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must cover behavioral traits. It mentions that location is 'resolved to an existing record or created automatically,' which is useful. However, it does not disclose side effects, permission requirements, or success/error responses, which would be beneficial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, each adding value. The first sentence states the purpose, and the second provides key parameter context. It is concise and front-loaded, though slightly more structure (e.g., listing required params) could improve scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 6 parameters (4 required) and no output schema, the description adequately covers the input but omits return value information. An agent would benefit from knowing what the tool returns (e.g., the created interview object).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are already described. The description adds value by clarifying that times are Unix timestamps and explaining location resolution behavior. This reduces ambiguity beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Schedule a new interview for an application') and the resource ('interview for an application'). It uses a specific verb ('schedule') and distinguishes itself from sibling tools like hires_list_interviews or hires_get_interview by focusing on creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what to provide (start/end times, interviewer IDs) but does not explicitly state when to use this tool versus alternatives (e.g., updating an interview or submitting feedback). No exclusion criteria or prerequisites are mentioned, leaving some ambiguity for agents.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_create_job (Grade: A)
Create a job with taxonomy, location, salary, and workflow configuration. Primary endpoint for programmatic job publishing. Required fields: status, title, description, location_city, location_country.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Public job title. | |
| status | Yes | Job status (e.g. Draft, Public). See GET /taxonomy/statuses. | |
| form_id | No | Application form ID. If omitted, a new form named after the job title is created with default questions. | |
| include | No | Comma-separated related resources to embed: workflow, hiring_team, pipeline_stages | |
| is_remote | No | Whether this is a remote position. | |
| company_id | No | Target company ID. Required only when the API key has access to multiple companies. | |
| salary_max | No | Maximum salary. | |
| salary_min | No | Minimum salary. | |
| category_id | No | Job category ID from GET /taxonomy/categories. | |
| description | Yes | Job description (HTML allowed). | |
| workflow_id | No | Workflow ID. If omitted, a new workflow named after the job title is created with default stages. | |
| department_id | No | Department ID from GET /taxonomy/departments. | |
| location_city | Yes | Job city. | |
| parent_job_id | No | Canonical parent job ID. If provided, the created job becomes a satellite job. | |
| salary_period | No | Salary period. | |
| internal_title | No | Internal-only title visible to the hiring team. | |
| location_state | No | Job state or region. | |
| internal_job_id | No | External reference ID from your ATS or HR system. | |
| salary_currency | No | Salary currency code (e.g. USD, EUR). | |
| location_country | Yes | Job country. | |
| education_level_id | No | Education level ID from GET /taxonomy/education-levels. | |
| employment_type_id | No | Employment type ID from GET /taxonomy/employment-types. | |
| knockout_questions | No | Boolean knockout questions added to the application form. | |
| experience_level_id | No | Experience level ID from GET /taxonomy/experience-levels. | |
| resume_field_status | No | Resume field behavior on the application form. | |
| location_postal_code | No | Postal or ZIP code. | |
| location_full_address | No | Full formatted address. | |
| location_street_address | No | Street address. | |
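With 28 parameters, a pre-flight check against the five required fields named in the description helps an agent fail fast. The field list is exactly the one documented above; the helper itself is an assumption, not part of the API.

```python
# Report which of hires_create_job's documented required fields are
# missing or empty before attempting the API call.
REQUIRED_JOB_FIELDS = ("status", "title", "description",
                       "location_city", "location_country")

def missing_job_fields(payload):
    """Return the required fields that are absent or falsy, in order."""
    return [f for f in REQUIRED_JOB_FIELDS if not payload.get(f)]
```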
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behaviors: creation of new form/workflow if omitted, satellite job creation via parent_job_id. It implies mutation (creation) and does not contradict any annotations (none provided). However, it does not detail all potential side effects like API rate limits or response structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first sentence clearly states purpose and scope, second lists required fields. No redundant or extraneous information. Front-loaded with key purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex schema (28 params) with full descriptions, the description covers core creation behaviors (default form/workflow, satellite jobs). Lacks error handling or return value info, but no output schema exists to elaborate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds minimal extra value beyond listing required fields and mentioning categories. At high coverage, baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Create a job') and resource ('job') with clear scope ('taxonomy, location, salary, and workflow configuration'). It also states it's the 'primary endpoint for programmatic job publishing,' distinguishing it from update tools like hires_update_job.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists required fields, providing basic guidance, but does not explicitly state when to use this tool vs alternatives (e.g., hires_update_job) or when not to use it. No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_create_job_webhook (Grade: A)
Register a webhook URL for job-related events. Core step for outbound integration setup. URL must be HTTPS.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias | |
| url | Yes | Webhook destination URL. Must be HTTPS. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds only the HTTPS constraint for the URL and does not disclose behavioral traits like required permissions, idempotency, response format, or error handling. Because this is a creation tool, its write behavior is implied but never stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences. First states purpose, second adds context, third adds a constraint. Every sentence is valuable with no fluff. Well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description is adequate for a simple tool: it explains the resource, a key constraint, and integration context. However, it lacks details on what specific job-related events trigger the webhook, response format, or prerequisite setup, leaving some gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description repeats the HTTPS requirement already in the schema's url property description, adding no new parameter meaning. The id parameter's description in the schema ('Job ID (numeric) or alias') is not supplemented.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool registers a webhook URL for job-related events, specifying the verb 'register' and the resource 'webhook URL for job-related events'. It distinguishes itself from the sibling 'hires_create_webhook' (likely for general events) and adds context as a core step for outbound integration.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context ('Core step for outbound integration setup') but does not explicitly mention when to use this tool versus alternatives like 'hires_create_webhook' or other webhook-related tools. It lacks when-not or alternative guidance.
hires_create_note (grade A)
Create a discussion note for a candidate. Supports visibility control (all or private) and @mentions with email notifications.
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | Note content. Supports HTML. | |
| include | No | Include related resources, e.g. 'user' for author details | |
| user_id | No | Author user ID. If omitted, the authenticated user is used | |
| visibility | No | Visibility: 'all' (default) or 'private' | |
| candidate_id | Yes | Candidate ID (numeric) or alias | |
| mention_user_ids | No | Array of user IDs to mention. Mentioned users receive email notifications. | |
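A hedged sketch of an arguments payload for hires_create_note, with every value illustrative; only `body` and `candidate_id` are required per the table:

```python
# Sketch of an arguments payload for hires_create_note.
# All values are illustrative.
payload = {
    "candidate_id": 1072,                    # Candidate ID (numeric) or alias
    "body": "<p>Strong phone screen.</p>",   # note content; HTML supported
    "visibility": "private",                 # 'all' (default) or 'private'
    "mention_user_ids": [12, 34],            # mentioned users get emails
}

# The two required fields from the parameter table:
required = {"body", "candidate_id"}
assert required <= payload.keys()
```

Note that including `mention_user_ids` triggers email notifications, so an agent should treat the field as a side effect, not mere metadata.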
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided; the description notes the visibility options and that @mentions trigger email notifications, but lacks details on authorization needs, other side effects, or rate limits. Adequate for a basic create tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler. First sentence states purpose, second adds key features. Efficiently sized and front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the core purpose and key features, but lacks guidance on the include parameter, the user_id default, the candidate_id format, and the return value. Minimally viable, but with gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; the description adds context on visibility values and notes that mentions trigger emails, but doesn't explain the include parameter or user_id defaults. It adds value beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states action (create), resource (discussion note for candidate), and key features (visibility control, @mentions with email). Distinguishes from siblings like hires_create_candidate or hires_create_application.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for creating notes with visibility and mentions, but offers no explicit when-to-use guidance versus alternatives (e.g., hires_update_note, hires_list_notes) and states no conditions.
hires_create_nurture_campaign (grade A)
Create a nurture campaign with steps. Steps are executed sequentially; each step has a type (email, sms, voicemail, move_to_next_stage, assign_tag, assign_task) with type-specific fields. Optionally bind to a workflow stage.
| Name | Required | Description | Default |
|---|---|---|---|
| steps | Yes | Campaign steps (at least one required) | |
| title | Yes | Campaign name | |
| stage_id | No | Stage ID that triggers the campaign | |
| timezone | No | IANA timezone, e.g. "America/New_York" | |
| company_id | No | Target company ID (optional if API key is scoped to one company) | |
| delay_time | No | Delay time in seconds | |
| send_to_all | No | Send to all candidates or only new ones (default false) | |
| workflow_id | No | Workflow ID to bind the campaign to | |
| relative_days | No | Relative days for schedule | |
| relative_time | No | Relative time for schedule (seconds from midnight) | |
| response_move_to_stage_id | No | Stage to move candidate to when they reply | |
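A sketch of an arguments payload for hires_create_nurture_campaign. The step fields other than `type` are hypothetical assumptions — the type-specific fields are not documented in the table above — and all values are illustrative:

```python
# Sketch of an arguments payload for hires_create_nurture_campaign.
# Step fields beyond 'type' (subject, body, tag) are assumed names,
# since type-specific fields are not documented here.
payload = {
    "title": "Engineering outreach",   # campaign name (required)
    "timezone": "America/New_York",    # IANA timezone
    "stage_id": 9,                     # stage that triggers the campaign
    "steps": [                         # executed sequentially; >= 1 required
        {"type": "email", "subject": "Hello", "body": "..."},   # assumed fields
        {"type": "assign_tag", "tag": "nurtured"},              # assumed fields
    ],
}

# The documented step types:
allowed = {"email", "sms", "voicemail", "move_to_next_stage",
           "assign_tag", "assign_task"}
assert all(step["type"] in allowed for step in payload["steps"])
assert len(payload["steps"]) >= 1
```

Checking each step's `type` against the documented set before calling catches the most likely agent mistake: inventing a step type.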
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that steps execute sequentially and have type-specific fields, but omits details on side effects, idempotency, or returned data. The description does not contradict annotations (none present).
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences succinctly convey the tool's purpose and key structural details. While the second sentence lists step types compactly, it is efficient and front-loaded with the main action ('Create a nurture campaign'). Slightly denser than ideal, but still concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has complex nested inputs (steps with anyOf) and no output schema. The description explains sequential execution but does not mention return values (e.g., campaign ID) or reference related tools like upload_attachment for voicemail steps, leaving gaps for a complete understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining that steps run sequentially and that workflow binding is optional, which is not in the schema. This goes beyond simply restating field names.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a nurture campaign with steps, enumerates step types, and mentions optional workflow stage binding. This differentiates it from sibling tools like delete, get, list, and update.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives, no prerequisites, and no exclusions. The only implicit context is that it creates campaigns, but there is no mention of scenarios or alternatives among siblings.
hires_create_question (grade A)
Create a reusable question with optional answer options for dropdown types. Used by forms and questionnaires.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Question text | |
| type | Yes | Question type (from hires_list_question_types) | |
| options | No | Answer options (for select/multiselect question types) | |
| company_id | No | Target company ID (uses default company when omitted) | |
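A sketch of an arguments payload for hires_create_question. The `select` type value is an assumption; per the table, valid type values actually come from hires_list_question_types:

```python
# Sketch of an arguments payload for hires_create_question.
# The 'select' type value is assumed; real values come from
# hires_list_question_types.
payload = {
    "text": "Which shift do you prefer?",  # question text (required)
    "type": "select",                      # assumed type value
    "options": ["Morning", "Evening"],     # only for select/multiselect types
}

# options only makes sense for dropdown-style types per the description
if "options" in payload:
    assert payload["type"] in ("select", "multiselect")
```

An agent should call hires_list_question_types first rather than guessing the `type` string.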
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only states that answer options are for dropdown types. It does not disclose permissions, idempotency, duplicate handling, or side effects. Minimal behavioral context is provided.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Front-loaded with verb and resource. Highly concise and well-structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a creation tool, but lacks explanation of return value (e.g., the created question object). Given no output schema and no annotations, the description could be more complete by mentioning what the response contains.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds meaning by clarifying that 'options' are only for dropdown types and emphasizing reusability. It goes beyond schema descriptions, though it could elaborate on company_id default behavior.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Create a reusable question' with the verb 'create' and resource 'reusable question'. It specifies optional answer options for dropdown types, distinguishing it from sibling tools like update or delete. Additional context ('Used by forms and questionnaires') adds purpose clarity.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for forms and questionnaires but does not provide explicit guidance on when to use this tool vs alternatives (e.g., one-time form creation). No exclusions or comparisons with siblings are given.
hires_create_webhook (grade A)
Create a company-scoped webhook subscription. Use for outbound company-level event integrations.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Company ID | |
| url | Yes | Webhook destination URL. Must be HTTPS. | |
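The payload shape mirrors hires_create_job_webhook, but `id` here is a Company ID rather than a Job ID — exactly the kind of ambiguity the review flags. A sketch with illustrative values:

```python
# Sketch for hires_create_webhook: 'id' is a Company ID here,
# not a Job ID. Values are illustrative.
payload = {
    "id": 55,                                    # Company ID
    "url": "https://example.com/hooks/company",  # must be HTTPS
}
assert payload["url"].startswith("https://")
```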
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description lacks details on side effects, authentication requirements, rate limits, or whether the operation is idempotent. For a create operation, more behavioral context is needed.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loaded with the core action, and contains no unnecessary words or repetition.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 2 parameters, no output schema, and no annotations, the description is minimalist. It does not specify return values or post-creation steps (e.g., verification), which may be important for webhook setup.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameters are already documented. The description adds 'company-scoped', which clarifies the 'id' parameter, but contributes little meaning beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (create) and the resource (company-scoped webhook subscription). It also distinguishes from sibling tools like hires_create_job_webhook by specifying 'company-scoped', making purpose unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates usage for 'outbound company-level event integrations', providing context. However, it does not explicitly state when not to use or mention alternatives like hires_create_job_webhook for job-level events, leaving room for improvement.
hires_delete_application (grade A)
Permanently delete an application. This removes it from all list and view queries.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. It states permanence and effect on queries, but does not mention potential side effects like cascading deletion of notes or attachments, which limits transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short, front-loaded sentences efficiently convey action and effect with no wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete tool with one parameter and no output schema, the description adequately covers purpose and effect. However, it could mention potential dependencies or error conditions for completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds no additional meaning to the 'id' parameter beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool permanently deletes an application and removes it from list and view queries. The verb 'delete' and resource 'application' are specific, and the effect distinguishes it from other tools like reject or transfer.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this is for permanent removal, contrasting with less destructive actions like reject, but does not explicitly list alternatives or provide when-not-to-use guidance.
hires_delete_candidate (grade B)
Permanently delete a candidate by ID or alias.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
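The schema's dual-typed `id` is worth illustrating, since an agent may pass either form. Both values below are illustrative:

```python
# hires_delete_candidate accepts either form of 'id' per the schema.
# Both values are illustrative.
by_id = {"id": 1072}           # numeric candidate ID
by_alias = {"id": "jane-doe"}  # string alias

assert isinstance(by_id["id"], int)
assert isinstance(by_alias["id"], str)
```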
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description says 'permanently delete' which implies irreversibility, but lacks details on side effects (cascading deletions), permissions needed, or error conditions. No annotations exist to fill the gap.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, front-loaded with the key action, and every word is necessary. No extraneous content.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-param tool, the description covers the core function but omits important context like irreversibility warning, permission requirements, or what happens post-deletion. Adequate but not comprehensive.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema fully describes the parameter (id can be integer or string, representing ID or alias). The description adds no extra semantics beyond the schema. With 100% schema coverage, baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'permanently delete' and the resource 'a candidate', and specifies identification by ID or alias. It distinctly separates from sibling delete tools targeting other entities.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., disqualify or archive), nor does it mention irreversibility or prerequisites. The description is purely functional.
hires_delete_company (grade B)
Delete a company. Use for lifecycle control in partner tenancy management.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Company ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must bear the full burden. It only says 'Delete,' which implies destruction, but doesn't disclose consequences, reversibility (though a restore tool exists), permissions, or side effects.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences with no wasted words. It front-loads the primary purpose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations or output schema, the description should provide more context, such as irreversibility, dependencies, or the existence of a restore tool. It is incomplete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already describes the 'id' parameter as 'Company ID.' The description adds minimal meaning beyond that. With 100% schema coverage, a score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Delete a company' clearly, with a specific verb and resource. It adds context about lifecycle control in partner tenancy management, distinguishing it from other delete tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context ('lifecycle control in partner tenancy management') but does not explicitly state when to use or avoid this tool, nor does it mention alternatives like the restore tool.
hires_delete_email_template (grade A)
Soft-delete an email template. Templates already used in automations will stop being available for new actions.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Email template ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It helpfully discloses the soft-delete nature and the impact on automations, but lacks details on reversibility, permission requirements, or effects on existing automations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, no wasted words. The first sentence states the action, the second adds critical behavioral context, making it efficient and front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has only one parameter and no output schema. The description is sufficient but could be improved by mentioning the response (e.g., success indication) or any preconditions, given the lack of annotations.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a description for the 'id' parameter. The tool description does not add any extra meaning beyond the schema, so it meets the baseline.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Soft-delete an email template', specifying the action (soft-delete) and the resource (email template). This uniquely identifies its purpose among sibling tools like create, get, update, and list.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that templates used in automations will stop being available for new actions, implying when the tool is appropriate. However, it does not explicitly state when to avoid using it or provide alternative tools.
hires_delete_form (grade C)
Delete an application form.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Form ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, and the description fails to disclose critical behavioral traits such as irreversibility, associated data effects, or permission requirements. A delete operation should explicitly state whether the action is permanent.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise, using a single sentence. It is front-loaded with the key action and resource. While minimal, it is appropriately sized for a simple tool.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (one parameter, no output schema), the description is adequate for stating the core action. However, it lacks essential behavioral context for a delete operation, such as irreversibility and side effects.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the only parameter 'id' is described as 'Form ID.' The description adds no additional meaning or constraints beyond the schema, meeting the baseline expectation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete) and the resource (application form). It is specific but does not differentiate from sibling delete tools beyond the resource name, which is already in the tool name.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like hires_delete_application or hires_delete_candidate. The description lacks any context about prerequisites or use cases.
hires_delete_job (grade B)
Delete a job. Use to align archived/removed positions across integrated platforms.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description only says 'Delete a job.' It fails to disclose critical behavioral traits such as irreversibility, effects on related data (candidates, applications), required permissions, or confirmation behavior. The description carries full burden but provides minimal insight.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. Immediately front-loads the action and then clarifies the use case.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should include more context: return value, irreversibility, effects on linked entities, etc. It is inadequate for a delete operation that likely has side effects.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (1 parameter 'id' with description 'Job ID (numeric) or alias'). The description adds no additional meaning beyond the schema. Baseline of 3 is appropriate since schema covers it well.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it deletes a job (specific verb and resource). The second sentence 'Use to align archived/removed positions across integrated platforms' adds a concrete use case, distinguishing it from related tools like update_job or set_job_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a usage context ('align archived/removed positions...') but does not explicitly say when not to use it or mention alternative tools (e.g., set_job_status). No sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_delete_job_webhook (Grade: A)
Delete a job webhook subscription by ID. Use for cleanup, rotation, and endpoint migration.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias | |
| webhook_id | Yes | Webhook ID to delete | |
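Since both parameters above are required, a call to this tool must always name the parent job and the webhook. A minimal sketch of the JSON-RPC `tools/call` envelope an MCP client would send (envelope shape per the MCP spec; the IDs `4211` and `77` are hypothetical):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Both arguments are required: the parent job and the webhook to remove.
payload = build_tool_call(
    "hires_delete_job_webhook",
    {"id": 4211, "webhook_id": 77},
)
```

Over Streamable HTTP this string would be POSTed to the server URL; the sketch only shows the request body, not transport or auth.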
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry full burden. It only states the destructive action without detailing permissions, irreversibility, or side effects. Lacks behavioral context beyond the basic delete operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, clear and front-loaded with the action. No unnecessary words, and the structure efficiently conveys the purpose and usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool and full schema coverage, the description is fairly complete. It clearly identifies the resource as job-specific and lists usage contexts. However, it could mention that this is separate from the general webhook deletion tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add any additional meaning to the parameters beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Delete), the resource (job webhook subscription), and the method (by ID). It distinguishes from siblings like hires_delete_webhook by specifying 'job' webhook.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists use cases: cleanup, rotation, and endpoint migration. Provides clear context for when to use this tool, though it does not mention when not to use it or compare to alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_delete_message (Grade: B)
Cancel a scheduled message before it is processed by the mailbox scheduler.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Message ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It discloses the timing constraint ('before processed') but fails to mention authorization, idempotency, or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no fluff, but lacks structured formatting. Efficient for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple tool with one parameter and no output schema. Describes core action and a key constraint, but omits post-cancellation effects or confirmation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds no extra meaning beyond the schema's 'Message ID.' The baseline of 3 applies as no additional semantic value is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Cancel' and the resource 'scheduled message', distinguishing it from siblings like 'hires_delete_notification_message' and 'hires_send_candidate_message'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (cancel scheduled messages before processing) but provides no explicit guidance on when not to use or alternatives like 'hires_patch_message'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_delete_note (Grade: B)
Delete a note. Use for moderation policies and data cleanup operations.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Note ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It only states 'Delete' without clarifying whether the deletion is permanent or reversible, what permissions are required, or any side effects. This is a significant gap for a destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences and zero wasted words. It front-loads the core purpose and adds context efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should provide more behavioral details about deletion (e.g., irreversibility, permissions, effect on related data). It is too minimal for a delete operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single parameter 'id' (note ID). The description adds no extra meaning beyond what the schema already provides, meeting the baseline but not exceeding it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('a note'), making the tool's purpose immediately understandable. However, it does not differentiate from other delete tools among siblings, like hires_delete_application or hires_delete_candidate, which share similar structure.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides two specific use cases ('moderation policies and data cleanup operations'), giving some context. But it lacks guidance on when not to use this tool, prerequisites (e.g., note ownership), or alternatives (e.g., updating a note instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_delete_notification_message (Grade: A)
Cancel a scheduled notification email before it is sent. Already sent messages cannot be canceled.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Notification email message ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden. It discloses the key behavioral constraint that cancellation is only possible before the message is sent, which prevents confusion, but it offers no further detail on side effects or permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two front-loaded sentences. Every word is informative, and no redundant information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple nature of the tool (single parameter, no output schema), the description is complete. It provides the essential constraint (unsent only) and the action, which is sufficient for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'id' is already well-described in the schema ('Notification email message ID.'). The description adds no additional meaning beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (cancel), the resource (notification email), and the condition (before it is sent). It distinguishes itself from sibling delete tools by specifying 'notification email' and the 'scheduled' nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (unsent emails) but does not provide explicit alternatives or when not to use (e.g., preferring other delete/cancel tools). It lacks differentiation from siblings like 'hires_delete_message'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_delete_nurture_campaign (Grade: A)
Delete (soft-delete) a nurture campaign. Active campaign executions will be stopped.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Nurture campaign ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions 'soft-delete' and 'active campaign executions will be stopped', which discloses key behavioral traits. However, without annotations, it fails to explain whether the data is recoverable, if permissions are required, or any side effects like logged entries. The transparency is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, containing only two sentences that front-load the core purpose and a key behavioral detail. Every word serves a purpose, with no redundancy or unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (1 parameter, no output schema, no annotations), the description adequately covers the main action and a critical behavioral nuance (stopping active executions). It is sufficiently complete for an agent to understand what the tool does, although it could mention the return value or confirmation behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers the single 'id' parameter with a description 'Nurture campaign ID', resulting in 100% schema coverage. The description adds no additional meaning beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete (soft-delete) a nurture campaign', specifying the verb 'delete' and the resource 'nurture campaign'. It also adds 'Active campaign executions will be stopped' to further clarify the action's effect, distinguishing it from other delete tools for different entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as updating the campaign status or using a hard delete if available. There are no explicit when-to-use or when-not-to-use instructions, leaving the agent to infer usage without context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_delete_question (Grade: A)
Delete a reusable question from the catalog. Use cautiously when deprecating question banks.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Question ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It indicates destruction ('Delete') and adds caution, but does not specify if deletion is soft or hard, what permissions are needed, or any cascading effects on associated data. The caution implies irreversibility, but more detail would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that are direct and contain no unnecessary words. It efficiently conveys the core action and a cautionary note.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete operation with one parameter and no output schema, the description covers the essential purpose and usage context. It lacks details on return values or error handling, but these are less critical for a straightforward deletion. The caution adds sufficient completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the only parameter 'id' is described as 'Question ID'). The description adds no additional semantic information about the parameter beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('reusable question from the catalog'). Among siblings like 'hires_create_question' and 'hires_update_question', it distinguishes itself by specifying deletion and the catalog context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises using cautiously when deprecating question banks, providing context for appropriate usage. However, it does not explicitly state when not to use the tool or mention alternative tools like 'hires_update_question' for disabling questions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_delete_webhook (Grade: A)
Delete a company-scoped webhook subscription by ID. Use for endpoint retirement and security rotation.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Company ID | |
| webhook_id | Yes | Webhook ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the deletion action but lacks explicit warnings about irreversibility or side effects. The phrase 'endpoint retirement and security rotation' hints at deliberate use but does not fully disclose destructive nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that are front-loaded with the action and purpose. Every sentence adds value; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description is minimally complete for a delete operation. It lacks details on return values, error conditions, or idempotency, but is sufficient for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers both parameters with clear descriptions ('Company ID', 'Webhook ID'). Schema coverage is 100%, so the description adds no additional meaning. Baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete'), the resource ('company-scoped webhook subscription'), and the method ('by ID'). It distinguishes from sibling tools like 'hires_create_webhook' or 'hires_delete_job_webhook' by specifying scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases ('endpoint retirement and security rotation'). It does not contrast with alternatives like 'hires_delete_job_webhook', but the scope ('company-scoped') indirectly guides correct usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_disqualify_candidate (Grade: B)
Disqualify a candidate from all active applications. Optionally provide rejection reason IDs. Returns affected application IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
| reasons | No | Array of rejection reason IDs from GET /taxonomy/rejection-reasons. | |
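Because `reasons` is optional, a well-behaved client should omit the key rather than send an empty array. A minimal sketch of an argument builder (the helper name and the IDs are hypothetical; only the parameter shapes come from the table above):

```python
def disqualify_args(candidate, reasons=None):
    """Build the arguments object for hires_disqualify_candidate.

    `candidate` may be a numeric ID or a string alias; `reasons` is an
    optional list of rejection reason IDs taken from the
    rejection-reasons taxonomy endpoint.
    """
    args = {"id": candidate}
    if reasons:
        # Omit the key entirely when no reasons are supplied,
        # since the parameter is optional.
        args["reasons"] = list(reasons)
    return args

by_id = disqualify_args(314)
by_alias = disqualify_args("jane-doe", [2, 5])
```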
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool disqualifies from all active applications and returns affected IDs. However, it omits behavioral traits like reversibility, permission requirements, or side effects on candidate status.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three short, front-loaded sentences that efficiently convey the key information. It could be slightly more informative but is not verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no output schema, the description covers the return value but lacks detail on the format of affected IDs, confirmation steps, or error conditions. It is adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-documented. The description adds value by explaining that 'reasons' are optional and from GET /taxonomy/rejection-reasons, but does not elaborate on 'id' beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'disqualify', the resource 'candidate', scope 'all active applications', and optional rejection reason IDs. It also specifies the output (affected application IDs), distinguishing it from per-application sibling tools like hires_reject_application.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as hires_batch_reject_applications or hires_reject_application. It does not mention prerequisites (e.g., candidate must have active applications) or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_download_attachment (Grade: A)
Download an attachment (resume, candidate file, application file, mail attachment, call recording). Pass the absolute URL returned by another endpoint (e.g. message.attachments[].url, cv.url, resume.url) — it MUST belong to the configured 100Hires API host; other hosts are rejected to avoid leaking the Bearer token. Returns {file_name, mime_type, size, data} where data is base64-encoded bytes. Files larger than 25 MB are rejected up-front (Content-Length check / streaming abort) without being loaded into memory.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Absolute attachment URL returned by another API response (e.g. https://api.100hires.com/v2/attachments/mail_attachment/<uuid>/<file_name>). Must match the API host. | |
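The host restriction, 25 MB cap, and base64-encoded result described above can be mirrored on the client side. A sketch under stated assumptions: the host constant and the result-dict shape (`{file_name, mime_type, size, data}`) are taken from the description; the function names are hypothetical:

```python
import base64
from urllib.parse import urlparse

API_HOST = "api.100hires.com"   # assumed host; must match the configured instance
MAX_BYTES = 25 * 1024 * 1024    # 25 MB cap stated in the tool description

def check_attachment_url(url: str) -> None:
    """Reject URLs outside the API host, mirroring the server-side guard
    that prevents the Bearer token from being sent to other hosts."""
    if urlparse(url).hostname != API_HOST:
        raise ValueError("attachment URL must belong to the 100Hires API host")

def decode_attachment(result: dict) -> bytes:
    """Decode the base64 `data` field of a download result."""
    if result["size"] > MAX_BYTES:
        raise ValueError("attachment exceeds the 25 MB limit")
    return base64.b64decode(result["data"])
```

In practice the server performs both checks itself; duplicating them client-side just fails fast before a round trip.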
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the return format ({file_name, mime_type, size, data}), base64 encoding, up-front rejection of files over 25 MB without loading them into memory, and the security measure of rejecting other hosts to prevent token leakage. No annotations are provided, so the description does the full job.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four dense sentences with no fluff. The core action is front-loaded, and the additional details are efficient. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers key aspects: usage, return shape, size limit, security. No output schema, so return format is explained. Minor missing details like error conditions, but overall complete for a single-parameter download tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds meaning beyond schema: explains URL must be absolute and from specific endpoints (e.g., message.attachments[].url), and must match API host. Provides context for source and security, despite 100% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Download an attachment' and lists specific types (resume, candidate file, application file, mail attachment, call recording). It also specifies using absolute URL from another endpoint, distinguishing it from upload/list siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use: pass absolute URL from another endpoint, and warns that URL must belong to the API host. Implicitly separates from list endpoints, but no explicit mention of when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_ai_score (Grade: A)
Get the structured AI score for an application, including per-criterion scores, justifications, and follow-up questions. Returns null score if the application has not been AI-scored.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
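The null-score edge case called out in the description is exactly the kind of behavior an agent must branch on. A minimal sketch of result handling, assuming a result shape (`score` plus per-criterion entries) that is inferred from the description rather than from a published output schema:

```python
def summarize_ai_score(result: dict) -> str:
    """Render a short summary of a hires_get_ai_score result.

    Assumed shape: `score` is None when the application has not been
    AI-scored; `criteria` is a list of {name, score} entries.
    """
    if result.get("score") is None:
        # The tool returns a null score for unscored applications.
        return "not yet AI-scored"
    criteria = ", ".join(
        f"{c['name']}: {c['score']}" for c in result.get("criteria", [])
    )
    return f"overall {result['score']}" + (f" ({criteria})" if criteria else "")
```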
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses the null return for unscored applications, which is a key behavioral trait. However, it does not cover auth requirements, rate limits, or potential errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no unnecessary words. It efficiently communicates purpose, contents, and a key behavioral edge case.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately outlines return value structure and null case. It lacks details on exact output format or error handling, but for a simple getter with one parameter, it is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the parameter 'id' has a clear description in the schema. The tool description adds no extra meaning beyond the schema, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves the structured AI score for an application, listing specific components (per-criterion scores, justifications, follow-up questions). It also clarifies behavior when not scored (returns null). This distinguishes it from sibling tools like hires_get_application.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing AI score data, but lacks explicit guidance on when to prefer this over broader tools like hires_get_application. No mention of prerequisites or exclusions, but the specificity makes the context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_application (Grade: A)
Get full application details including stage, status, and rejection context. Recommended before mutating stage transitions.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. | |
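The `include` parameter takes a comma-separated string, not an array, which is an easy mistake for an agent to make. A sketch of an argument builder that performs the join (the helper name is hypothetical; the valid relation names come from the table above):

```python
VALID_RELATIONS = {"candidate", "cv.text"}  # per the parameter table

def get_application_args(app_id, include=()):
    """Build arguments for hires_get_application, joining the optional
    `include` relations into the comma-separated string the tool expects."""
    unknown = set(include) - VALID_RELATIONS
    if unknown:
        raise ValueError(f"unknown include relations: {sorted(unknown)}")
    args = {"id": app_id}
    if include:
        args["include"] = ",".join(include)
    return args

plain = get_application_args(9)
embedded = get_application_args(9, ["candidate", "cv.text"])
```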
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that the tool returns stage, status, and rejection context, but does not detail other behavioral aspects like rate limits, authentication, or side effects. Sufficient for a read-only tool but lacks richness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with key action. Every sentence adds value: the first states purpose and content, the second provides usage guidance. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description must convey return value. It mentions stage, status, and rejection context but does not specify if the full application object is returned or list other fields. Could be more complete regarding error conditions or authentication, but adequate for a simple get-by-id tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds no new parameter-specific information; the included fields mentioned (stage, status, rejection context) are general, not parameter-level. The include parameter's purpose is fully described in schema, so description adds minimal value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('get') and resource ('full application details'), and lists key fields (stage, status, rejection context). It distinguishes from sibling tools like hires_advance_application or hires_list_applications by recommending use before mutations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Recommended before mutating stage transitions,' providing clear context for when to use this tool. Does not explicitly list alternatives or when not to use, but the recommendation effectively guides agent behavior.
hires_get_billing (A)
Get billing/pricing capability flags for the current company. Use before invoking paid-only API behaviors.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
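Since the tool takes no parameters, a call reduces to the tool name with an empty arguments object. The gating sketch below shows how an agent might use the returned flags before a paid-only action; the flag names are invented examples, not documented fields.

```python
def is_enabled(flags: dict, capability: str) -> bool:
    """Return True only when the billing flags explicitly allow a capability."""
    return bool(flags.get(capability, False))

# Arguments for a tools/call request: none are needed.
arguments = {}

# Invented example of a flags response, plus the pre-flight check
# an agent would run before invoking a paid-only behavior.
flags = {"ai_scoring": True, "bulk_messaging": False}
assert is_enabled(flags, "ai_scoring")
assert not is_enabled(flags, "bulk_messaging")
```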
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It correctly implies a read-only operation ('Get'), but does not explicitly state that it has no side effects or list any required permissions. However, the context of checking flags before paid actions suggests safety, earning a 4.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states what it does, second explains when to use it. No wasted words, front-loaded with essential information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, and description does not detail return format (e.g., type of flags). However, given zero parameters and low complexity, the description suffices for an agent to understand the tool's role and use it effectively. Slight lack of return detail prevents a 5.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters with 100% schema_description_coverage, so baseline is 4. Description adds no parameter details beyond schema, which is acceptable given no parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves billing/pricing capability flags for the current company. The verb 'Get' and resource 'billing/pricing capability flags' are specific. Among a large set of sibling tools, this is the only billing-related one, so distinction is clear.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to use before invoking paid-only API behaviors, providing clear when-to-use context and implicitly stating when not needed (if no paid actions planned).
hires_get_candidate (A)
Get full candidate data including application summaries by candidate ID or alias.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
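Because the `id` parameter accepts either a numeric ID or a string alias, both argument forms are valid. A small sketch (the values are placeholders):

```python
from typing import Union

def candidate_arguments(ref: Union[int, str]) -> dict:
    """Build the arguments object; 'id' accepts a numeric ID or a string alias."""
    return {"id": ref}

by_id = candidate_arguments(42)             # numeric candidate ID
by_alias = candidate_arguments("jane-doe")  # string alias (placeholder)
```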
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits like authentication needs or rate limits, but it only states the return value. It correctly implies a read operation without side effects.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads key information (verb, resource, scope, identification method). No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter tool and no output schema, the description adequately covers the purpose and identification method. It could mention limitations like data freshness, but is sufficient for an agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter 'id' is fully described. The description echoes the schema's note about accepting ID or alias, adding no new semantic value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get', the resource 'full candidate data', and includes 'application summaries' as part of the output. It distinguishes from sibling tools like hires_get_application and hires_get_job.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for retrieving candidate data but does not provide explicit 'when to use' or alternatives. For example, it does not suggest using hires_get_application if only application details are needed.
hires_get_candidate_resume (A)
Get the primary resume for a candidate. Returns uuid, absolute download url (use Bearer auth), relative_time, file metadata, type. Use include='text_content' to also get the parsed plain-text content in a text field without downloading the file.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
| include | No | Comma-separated optional fields. Use 'text_content' to add a `text` field with parsed plain-text resume content. | |
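A sketch of how an agent might consume this tool: request the parsed text inline, then build the Authorization header the description says the download URL requires. The response field names follow the description; the URL and token values are placeholders.

```python
arguments = {"id": 42, "include": "text_content"}

# Hypothetical response carrying the fields named in the description.
response = {
    "uuid": "abc-123",
    "url": "https://example.invalid/resumes/abc-123",  # absolute download URL
    "text": "Plain-text resume content...",            # present because of include
}

# The download URL uses Bearer auth per the description.
token = "PLACEHOLDER_TOKEN"
headers = {"Authorization": "Bearer " + token}
```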
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the return fields, mentions Bearer auth for the download URL, and explains the include parameter. However, it does not cover error handling or prerequisites.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: the first states the main purpose, the second adds detail about the include parameter. No waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return fields and the optional text_content. It lacks details on error cases or missing resumes, but is adequate for a simple retrieval tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining the return format and how the include parameter works. It also clarifies the id parameter's format, which matches the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets the primary resume for a candidate and lists the returned fields. This differentiates it from sibling tools like hires_get_candidate or hires_download_attachment.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool returns and how to use the include parameter, but does not explicitly state when to use this tool versus alternatives like hires_download_attachment or hires_get_candidate.
hires_get_career_job (A)
Get full details of a single public job by ID. Returns salary, education level, experience level, and other extended fields. Returns 404 for draft, archived, or internal jobs.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID | |
| company_slug | Yes | Company slug identifying the career site | |
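The 404-for-non-public-jobs behavior is worth handling explicitly. A sketch, with placeholder IDs and an invented helper that interprets a hypothetical status code:

```python
def fetch_career_job(status_code: int, body: dict) -> dict:
    """Interpret a response; 404 means the job is draft, archived, or internal."""
    if status_code == 404:
        raise LookupError("Job is not publicly visible (draft, archived, or internal)")
    return body

arguments = {"id": 7, "company_slug": "acme"}  # placeholder values

job = fetch_career_job(200, {"id": 7, "salary": None})
try:
    fetch_career_job(404, {})
except LookupError:
    hidden = True
```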
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of disclosure. It reveals the 404 behavior for certain job states, but does not mention authentication, rate limits, or what happens if the job does not exist.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, zero waste. The second sentence adds valuable detail about return fields and error cases without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description lists key return fields (salary, education level, experience level) and error behavior. For a simple get-by-id tool, this is fairly complete, though it could mention if authentication is required or if the response includes all job fields.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and both parameters (id, company_slug) are described clearly in the schema. The description adds no additional meaning beyond 'by ID' which is already implicit in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Get full details of a single public job by ID' specifying the verb, resource, and identifier. It also distinguishes from siblings by noting it returns 404 for non-public jobs, implying it's for public jobs only.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (for public jobs) by stating it returns 404 for draft, archived, or internal jobs. However, it does not explicitly name alternatives like hires_get_job which may handle all job types.
hires_get_company (A)
Get company profile and owner metadata. Use before updates or ownership-sensitive actions.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Company ID | |
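The description's read-before-write guidance can be sketched as a simple call ordering; the company ID is a placeholder and hires_update_company is a sibling tool named elsewhere in this listing.

```python
company_id = 99  # placeholder

# Fetch the company profile first, then run the ownership-sensitive mutation.
get_call = {"name": "hires_get_company", "arguments": {"id": company_id}}
call_order = [get_call["name"], "hires_update_company"]  # read first, mutate second
```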
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. While it implies a read-only operation, it does not explicitly state idempotency, safety, or error behavior. The lack of details leaves some ambiguity.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—two sentences that front-load the purpose and immediately follow with usage guidance. Every word adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no output schema), the description is fairly complete. It explains what is retrieved and when to use it. A slightly more detailed note on return structure would improve completeness, but it's already sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the only parameter 'id' is described as 'Company ID' in the schema. The description does not add additional semantic meaning beyond what the schema already provides, so baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get company profile and owner metadata.' It uses a specific verb and resource, and distinguishes itself from sibling tools like hires_update_company and hires_delete_company.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage context: 'Use before updates or ownership-sensitive actions.' This guides the agent on when to invoke this tool versus alternatives.
hires_get_email_template (A)
Get full details of a specific email template by ID, including subject and body content.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Email template ID | |
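A minimal call sketch; the ID is a placeholder, and the field names below are only an assumption drawn from the "subject and body content" wording, since no output schema is published.

```python
arguments = {"id": 314}  # placeholder email template ID

# Assumed response fields, per the description's "subject and body content".
expected_fields = {"subject", "body"}
```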
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided; the description implicitly indicates a read operation via 'Get'. It lacks an explicit statement about side effects or permissions. Adequate but minimal.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action and resource, includes key return attributes. No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (1 param, no output schema), description adequately covers purpose and returns. Could mention return format but not required. Complete enough for an agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers the single parameter with 100% coverage; description adds no extra meaning beyond 'by ID'. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states action (Get), resource (email template), and what details are returned (subject and body content). Distinguishes from list and create siblings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage when you have a template ID and want full details, but does not explicitly contrast with list or other get tools. No when-not or alternatives mentioned.
hires_get_evaluation (B)
Get a filled evaluation form with all answers. Returns evaluator info, summary score, summary text, and individual question answers. Use for detailed review of evaluator feedback on a candidate application.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Evaluation form ID | |
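A sketch of reading the fields the description names from a hypothetical response; the ID, names, and values are placeholders, and the exact field names are assumptions since no output schema is published.

```python
arguments = {"id": 500}  # placeholder evaluation form ID

# Hypothetical response shape based on the fields the description names.
evaluation = {
    "evaluator": {"name": "A. Reviewer"},
    "summary_score": 4,
    "summary_text": "Strong communicator.",
    "answers": [{"question": "Culture fit?", "answer": "Yes"}],
}
top_line = (evaluation["evaluator"]["name"], evaluation["summary_score"])
```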
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits like side effects, permissions, or limitations. It simply describes what is returned without adding context beyond the schema, leaving the agent uninformed about access controls or constraints.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short, front-loaded sentences. The first states the purpose, and the second details returns and use case. No unnecessary words or repetition.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter and no output schema, the description covers the main aspects: action, resource, return fields, and a use case. It does not mention how the ID relates to an application, but the use case implies it. Slightly lacking in linking to broader workflow.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the single parameter 'id' is described as 'Evaluation form ID'). The tool description does not add extra meaning beyond the schema; it neither explains how to obtain the ID nor provides additional context. Baseline 3 due to high coverage, no added value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'filled evaluation form with all answers'. It specifies return fields (evaluator info, summary, answers) and a use case ('detailed review'). However, it does not explicitly differentiate from sibling tools like hires_list_application_evaluations, which likely lists evaluation forms.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description suggests using the tool for 'detailed review of evaluator feedback', providing context. However, it lacks guidance on when not to use it or mention of alternatives, such as listing evaluations first to obtain the ID.
hires_get_form (B)
Get form details including all questions with their statuses.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Form ID. | |
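A sketch of filtering the per-question statuses the description mentions; the response shape and status values are assumptions, since no output schema is published.

```python
arguments = {"id": 8}  # placeholder form ID

# Hypothetical response: each question carries a status field.
form = {
    "id": 8,
    "questions": [
        {"text": "Years of experience?", "status": "active"},
        {"text": "Expected salary?", "status": "archived"},
    ],
}
active = [q["text"] for q in form["questions"] if q["status"] == "active"]
```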
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It only states what the tool does (get details) but does not explicitly confirm it is read-only, describe error behavior for missing IDs, or mention any side effects. The name implies read-only, but additional context would be helpful.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence of 9 words that efficiently conveys the core functionality. It is concise but could include a slight hint about usage (e.g., 'Use the ID from list_forms'). Still, it avoids verbosity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter and no output schema or annotations, the description is minimally adequate. It mentions the inclusion of questions and statuses, but does not clarify the full return structure (e.g., form fields) or error handling. It covers the essential elements but leaves some gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (single parameter 'id' described as 'Form ID'). The description adds no additional meaning beyond the schema; it does not explain where to find the ID or its format. With high schema coverage, a baseline of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'get' and the resource 'form details', specifying that it includes 'all questions with their statuses'. This distinguishes it from sibling tools like 'hires_list_forms' (which likely returns a summary list) and 'hires_get_question' (which retrieves a single question).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives such as 'hires_list_forms' or 'hires_get_question'. It does not mention prerequisites or context like obtaining the form ID from a list or that this tool is for detailed retrieval.
hires_get_interview (A)
Get full details of a specific interview by ID. Use include to embed related candidate, application, or job data.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Interview ID | |
| include | No | Comma-separated related resources to embed: candidate, application, job | |
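Since the schema only enumerates three valid include values, an agent can validate them before calling. A sketch with a placeholder ID and an invented helper:

```python
ALLOWED_INCLUDES = {"candidate", "application", "job"}  # from the parameter table

def build_include(*relations: str) -> str:
    """Join relations into the comma-separated form, rejecting unknown values."""
    unknown = set(relations) - ALLOWED_INCLUDES
    if unknown:
        raise ValueError(f"Unsupported include values: {sorted(unknown)}")
    return ",".join(relations)

arguments = {"id": 17, "include": build_include("candidate", "job")}
```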
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It describes the operation as getting details, but does not disclose behavioral traits like authentication requirements, rate limits, or that it is read-only. Minimal additional value beyond the schema.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the main action and efficient use of words. No extraneous information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get tool with two parameters and no output schema, the description covers the essential: what it does and how to use the optional embedding. Adequate for basic usage.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds marginal value by restating the embedding hint for `include`, but does not provide new information beyond the schema's own parameter descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get full details of a specific interview by ID', providing a specific verb and resource. It also mentions the optional `include` parameter to embed related data, distinguishing it from list tools like `hires_list_interviews`.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the `include` parameter usage but lacks explicit guidance on when to use this tool versus alternatives, such as when to use `hires_list_interviews` for a list or `hires_get_candidate` for candidate details.
hires_get_job (A)
Get full details of a job by ID or alias. Use include to load related workflow, hiring team, or pipeline stages data.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias | |
| include | No | Comma-separated related resources to embed: workflow, hiring_team, pipeline_stages | |
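A sketch combining the two parameter features: `id` may be numeric or an alias, and `include` is restricted to the three values in the table. The helper and example values are invented for illustration.

```python
ALLOWED = {"workflow", "hiring_team", "pipeline_stages"}  # from the parameter table

def job_arguments(ref, include=()):
    """'id' may be a numeric ID or a string alias; include values are validated."""
    bad = set(include) - ALLOWED
    if bad:
        raise ValueError(f"Unknown include values: {sorted(bad)}")
    args = {"id": ref}
    if include:
        args["include"] = ",".join(include)
    return args

args = job_arguments("senior-engineer", include=["workflow"])  # alias form
```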
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It only states 'Get full details,' implying a read-only operation, but does not disclose permissions, error handling, side effects, or what 'full details' entails. This is minimal for a tool with no annotation support.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first states the core purpose, the second provides a usage tip for the include parameter. No unnecessary words or repetition. Highly efficient and front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool without output schema or annotations, the description is adequate but incomplete. It omits what fields are returned, error conditions (e.g., job not found), and any prerequisites. Given the tool's simplicity, it covers the basics but leaves gaps in completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds context for 'include' by explaining it loads related data (workflow, hiring team, pipeline stages), which the schema already lists. For 'id,' the description repeats schema info. Overall, the description adds some but limited additional meaning beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full details of a job by ID or alias, using the verb 'Get' and specifying the resource. It differentiates from siblings like hires_list_jobs by targeting a single job and mentions the optional include parameter for related data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a specific job's details are needed, but it does not explicitly compare to alternatives like hires_list_jobs or hires_get_career_job. No when-not-to-use guidance is provided, but the context is clear for a simple retrieval tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_message (A)
Get a scheduled message by ID. Returns scheduler-backed message details including sender account, schedule timestamps, and cancelability.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Message ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool returns 'scheduler-backed message details including sender account, schedule timestamps, and cancelability', implying a read-only operation. However, it does not disclose authentication requirements, rate limits, or access constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded with the core action and resource. Every word adds value; no fluff or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the purpose and key details of the response. It could mention error behavior (e.g., if ID not found), but the provided information is largely sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one required parameter, 'id', with the description 'Message ID'. Schema coverage is 100%, so the baseline is 3. The tool description adds no meaning beyond 'by ID', so it does not improve parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get a scheduled message by ID', specifying the exact verb and resource. It distinguishes from sibling tools like 'hires_get_notification_message' by specifying 'scheduled message', and from 'hires_list_messages' which lists multiple messages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for retrieving a single scheduled message by ID, but does not explicitly provide guidance on when to use this tool versus alternatives like 'hires_list_messages' or 'hires_get_notification_message'. No exclusions or conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_note (A)
Get a single note with author and visibility metadata. Use include=user to load author details.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Note ID | |
| include | No | Include related resources, e.g. 'user' for author details | |
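As a sketch of how an agent might exercise the documented include hint, here are hypothetical call arguments built only from the parameters in the table above; the note ID is a made-up value.

```python
# Hypothetical arguments for a hires_get_note call. Only the documented
# parameters (id, include) are used; the ID value is illustrative.
note_minimal = {"id": 42}                         # note without author details
note_with_author = {"id": 42, "include": "user"}  # loads author details, per the hint
```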
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description adds context about returned metadata and the include parameter but does not cover error behavior, authentication, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no redundancy. First sentence states the core purpose and second provides a parameter hint. Perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get tool with 2 parameters and no output schema, the description is fairly complete. It explains what the response includes and how to get author details, though response structure is not detailed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% but the description enhances the 'include' parameter with a concrete usage example ('Use include=user to load author details'), adding practical value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get a single note' with specific metadata (author and visibility). It distinguishes from sibling tools like 'hires_list_notes' and 'hires_create_note'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching a single note but does not explicitly contrast with alternatives like 'hires_list_notes' or mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_notification_message (A)
Get a notification email message (e.g. rejection email) by ID. Returns subject, body, sender, recipient, and schedule metadata. Use candidate messages list to discover notification message IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Notification email message ID. | |
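The description's discovery hint implies a two-step flow: list a candidate's messages, then fetch one notification message by ID. A minimal sketch, assuming a generic `call_tool` stand-in for an MCP client invocation; the list tool's `candidate_id` parameter and its response shape are assumptions, not documented here.

```python
# Sketch of the discovery flow: list messages first, then get one by ID.
# `call_tool` is a hypothetical MCP client helper, not a real API.
def fetch_notification_message(call_tool, candidate_id):
    # Assumed: hires_list_messages accepts candidate_id and returns a list
    # of message objects that each carry an "id" field.
    messages = call_tool("hires_list_messages", {"candidate_id": candidate_id})
    message_id = messages[0]["id"]
    return call_tool("hires_get_notification_message", {"id": message_id})
```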
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must bear the full burden. It lists the return fields but does not disclose prerequisites beyond the ID, permissions, rate limits, or error conditions. For a simple get operation, this is minimally adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loading the core purpose and return fields, then adding a practical usage hint. Every sentence adds value with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, so the description adequately explains return fields. It lacks details on error cases or prerequisites beyond the ID, but for a straightforward retrieval tool, it covers the essential context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of the single parameter with a description matching the tool's purpose. The description adds no extra meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a notification email message by ID, listing the returned fields (subject, body, sender, recipient, schedule metadata). It distinguishes from sibling get tools like hires_get_message by specifying the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a practical hint ('Use candidate messages list to discover notification message IDs'), indicating how to obtain the required ID. However, it does not explicitly state when to use this tool over alternatives or provide exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_nurture_campaign (A)
Get a single nurture campaign by ID with all steps and configuration details.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Nurture campaign ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the behavioral disclosure burden. The verb 'Get' implies a read operation, but the description does not explicitly state idempotency, lack of side effects, or any restrictions. It could be more explicit about the read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the tool's purpose and output. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get operation with one parameter and no output schema, the description adequately explains what is returned (all steps and configuration details). It is complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'id' has a clear description in the schema ('Nurture campaign ID'). The description adds no additional meaning beyond what the schema provides, and schema coverage is 100%, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a single nurture campaign by ID, including all steps and configuration details. It distinguishes itself from list and write operations among siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a specific campaign ID is known and full details are needed. It does not explicitly state when not to use it or name alternatives, but context is clear from the sibling names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_question (A)
Get a question definition including type and options by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Question ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It implies a read-only operation via 'Get', but does not explicitly state that it does not modify data or disclose any authorization needs, rate limits, or error behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the purpose and contains no extraneous words. Every word contributes to understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description adequately conveys the primary function and key return fields. However, it could mention additional fields returned or behavior for invalid IDs to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema by noting the return includes 'type and options', but adds no syntax or constraints for the id parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get', the resource 'question definition', and specifies it retrieves 'type and options by ID'. This distinguishes it from sibling tools like list, create, update, and delete.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool or when to consider alternatives like hires_list_questions. No exclusions or context are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_user (A)
Get a single user by ID within current tenant scope. Use for identity resolution in automation flows. The default_mail_account_id field can be used as from_account_id when sending emails.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | User ID | |
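The description's identity-resolution use case can be sketched as: resolve a user by ID, then reuse `default_mail_account_id` as `from_account_id` when sending email. `call_tool` is a hypothetical stand-in for an MCP client invocation, and the email-sending step itself is not shown.

```python
# Sketch of the documented pattern: the user record's
# default_mail_account_id can serve as from_account_id for email sends.
def resolve_from_account(call_tool, user_id):
    user = call_tool("hires_get_user", {"id": user_id})
    return user["default_mail_account_id"]
```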
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds value by noting that 'default_mail_account_id' can be used as 'from_account_id' when sending emails, but it does not disclose other behaviors such as authentication requirements, error handling (e.g., user not found), or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences. The first sentence clearly states the purpose, and the second provides a practical detail. No redundant or irrelevant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with one parameter and no output schema. The description covers the purpose and scope, plus a useful hint about a field. However, it could be more complete by briefly describing the return value or handling of missing users.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single parameter 'id' described as 'User ID'. The description does not add any additional meaning beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get a single user by ID within current tenant scope', specifying the verb, resource, and scope. This distinguishes it from the sibling tool 'hires_list_users' which retrieves multiple users.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a concrete use case: 'Use for identity resolution in automation flows.' While it does not explicitly exclude alternative tools, the context implies that 'hires_get_user' is for individual lookups, contrasting with 'hires_list_users' for bulk retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_get_workflow_stages (A)
Get stages for a specific workflow by ID. Equivalent to hires_list_workflow_stages with workflow_id filter.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Workflow ID | |
| company_id | No | Target company ID (uses default company when omitted) | |
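The claimed equivalence with hires_list_workflow_stages can be written out as two spellings of the same query. The ID is a made-up value, and the list tool's `workflow_id` filter name is taken from the description rather than from a documented schema.

```python
# Two equivalent spellings of "stages for workflow 10", per the description.
get_stages_args = {"id": 10}            # hires_get_workflow_stages
list_stages_args = {"workflow_id": 10}  # hires_list_workflow_stages with filter
```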
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full responsibility for behavioral disclosure. It only describes the basic action but omits any details about side effects, permissions required, error conditions, or response structure. For a read operation, this lack of transparency is a notable gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences with no unnecessary words. Every sentence adds value: the first states the purpose, the second provides context about the sibling tool. It is appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema, no annotations), the description captures the essential purpose and relationship to a sibling. However, it could be improved by mentioning what the response contains (e.g., a list of stage objects). It is mostly complete for a straightforward get operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds minimal value beyond schema: it ties the 'id' parameter to a workflow ID and mentions the relationship to the sibling tool. No additional parameter details are provided, so the score remains at the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get stages for a specific workflow by ID', which uses a specific verb and resource. It also distinguishes itself from the sibling tool hires_list_workflow_stages by noting equivalence with a filter, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions equivalence to hires_list_workflow_stages with a workflow_id filter, implying this tool is for fetching stages by a known workflow ID. While it doesn't state when not to use it or list alternatives explicitly, the context is clear for an agent familiar with the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_hire_application (A)
Mark an application as hired. This is the finalization step in a hiring workflow. The application status changes to 'hired' and hired_at is set.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. | |
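A hypothetical argument payload, built only from the documented parameters; the application ID is a made-up value. Per the description, a successful call flips the application's status to 'hired' and sets hired_at, so this is a mutation, not a read.

```python
# Illustrative arguments for hires_hire_application (a mutating call).
hire_args = {
    "id": 314,                       # application to finalize (made-up ID)
    "include": "candidate,cv.text",  # embed candidate and CV text in the response
}
```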
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reveals that the application status changes to 'hired' and 'hired_at' is set, which is the primary behavioral effect. No annotations exist to provide additional safety or permission cues, so the description carries the burden of transparency. It could have mentioned potential side effects (e.g., notifications) but overall is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exceptionally concise: two sentences that front-load the purpose and follow with the key effect. Every sentence provides essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description adequately explains the tool's effect on the application record. However, it does not mention return values, error conditions, or prerequisites (e.g., whether the application must be in a specific state). It is mostly complete for a mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for its two parameters (id and include), with clear schema descriptions. The description adds no further meaning to these parameters beyond what the schema already provides, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Mark an application as hired'), the resource ('application'), and the context ('finalization step in a hiring workflow'), effectively distinguishing it from siblings like hires_reject_application or hires_advance_application.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies that this tool is used at the final stage of hiring ('finalization step') but does not explicitly state when to use it versus alternatives such as rejecting the application. No explicit when-not-to-use guidance or alternative tool names are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_application_attachments (B)
List all file attachments linked to an application (resumes, cover letters, documents). Returns file metadata and download URLs.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description must bear the full burden of behavioral disclosure. While it states the tool lists 'all' attachments, it omits details like pagination, ordering, or side effects (e.g., read-only nature). The lack of an explicit read-only hint or stated authentication requirements leaves ambiguity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no unnecessary words. It front-loads the action ('List all file attachments') and includes examples of attachment types and return information. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool has low complexity (one parameter), the description lacks information about pagination, limits, or error handling. No output schema exists, so more detail on return structure would be beneficial. Adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for the single parameter 'id' (Application ID). The tool description restates the parameter's role but provides no additional semantic value beyond the schema, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists file attachments for an application, specifies types (resumes, cover letters, documents), and indicates it returns metadata and download URLs. This distinguishes it from siblings like hires_download_attachment (download a single attachment) or hires_upload_application_attachment (upload).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus alternatives such as hires_list_candidate_files (which lists candidate files) or hires_get_application (which gets application details). No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_application_evaluations (A)
List all filled evaluation forms for an application. Each evaluation includes the evaluator, summary score (strong-yes to strong-no), and summary text.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
As a list endpoint, it is likely read-only, but without annotations the description carries the full burden. It does not explicitly state safety or side effects, though it accurately reflects the operation. A more explicit declaration of the read-only nature would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, consisting of two sentences that convey the purpose and output without any unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description helpfully lists the fields included in each evaluation. However, it could mention pagination or ordering. Overall, it is fairly complete for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already documents the single parameter 'id' with the description 'Application ID'. The tool description does not add extra meaning or constraints beyond that, so the baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action 'List' and the resource 'all filled evaluation forms for an application', with details about what each evaluation includes. This distinguishes it from sibling tools like 'hires_get_evaluation' and 'hires_list_forms'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the context implies usage for listing evaluations per application via the 'id' parameter, there is no explicit guidance on when to use this tool versus alternatives like 'hires_get_evaluation'. The usage is implied but not clearly delineated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_applications (A)
List applications across all accessible jobs. Supports filtering by candidate, job, stage, status, AI score range, and date ranges. Use for pipeline analytics, sync jobs, and ATS dashboards.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1). | |
| size | No | Items per page (default 25, max 100). | |
| sort | No | Sort order. Prefix with - for descending. Default: -created_at. | |
| job_id | No | Filter applications by job ID. | |
| status | No | Filter by application status: pending (active), hired, or rejected. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. Example: candidate,cv.text | |
| stage_id | No | Filter applications by pipeline stage ID. Best used together with job_id. | |
| company_id | No | Filter by company ID. Omit for all accessible companies. | |
| ai_score_max | No | Return only applications with ai_score <= this value. | |
| ai_score_min | No | Return only applications with ai_score >= this value. | |
| candidate_id | No | Filter applications by candidate ID. | |
| created_after | No | Return only applications created after this Unix timestamp (seconds). | |
| updated_after | No | Return only applications updated after this Unix timestamp (seconds). Use for incremental sync. |
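The `updated_after` filter is what makes incremental sync work. A minimal sketch of assembling the call arguments, using only parameter names from the table above (how the tool is actually invoked depends on your MCP client, so no transport code is shown):

```python
def incremental_sync_args(last_sync_ts: int, page: int = 1) -> dict:
    """Arguments for hires_list_applications tuned for incremental sync."""
    return {
        "updated_after": last_sync_ts,  # Unix seconds; only rows updated since this moment
        "size": 100,                    # documented maximum page size
        "page": page,                   # advance this until a page comes back short
    }

args = incremental_sync_args(last_sync_ts=1700000000)
```

Store the timestamp of the last successful sync and pass it back on the next run; paging continues until a response contains fewer than `size` items.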
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the tool as listing applications and lists filters, but does not disclose behavioral traits such as read-only nature, rate limits, data freshness, or pagination defaults beyond schema. This is a significant gap for a list tool with many parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: three sentences. The first sentence states the core purpose, the second lists filters, and the third gives use cases. Every sentence adds value, and the structure is front-loaded and easy to scan.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (13 parameters, no output schema, no annotations), the description is fairly complete. It covers purpose, filters, and use cases. It does not discuss output format or limitations beyond schema, but for a list tool with good schema coverage, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents each parameter. The description adds value by grouping filter categories (candidate, job, stage, etc.) and mentioning use cases, but does not explain parameter semantics beyond what is in the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List'), resource ('applications'), and scope ('across all accessible jobs'). It also lists supported filters and specific use cases, distinguishing it from siblings like hires_get_application (single application) and hires_create_application (create).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use for pipeline analytics, sync jobs, and ATS dashboards.' It does not explicitly state when not to use it or mention alternatives, but the context of sibling tools provides that information. The guidance is clear and actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_boards
List available publishing boards with metadata. Use for distribution setup and board selection.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description only implies a read operation without detailing metadata content, access requirements, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no redundancy, clearly stating purpose and a usage hint.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a parameterless list tool, but lacks output format details and fails to clarify the difference between 'boards' and 'job boards' among siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, and the description appropriately omits parameter details; per the guidelines, a baseline of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists publishing boards with metadata, but fails to distinguish from the similar sibling `hires_list_job_boards`, which could cause confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes use for distribution setup and board selection, implying context, but no mention of when not to use or alternative tools like `hires_list_job_boards`.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_candidate_activities
List timeline activities for a candidate (comments, stage moves, AI responses, etc.). Supports filtering by event type.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
| page | No | Page number (1-based). | |
| event_type | No | Comma-separated event types to filter. Supported: comment, copilot_response, stage_moved, automation_action_triggered, assign_job, enrichment, call, validate_emails, profile_mutation, qualification, assign_tags, assign_sources, candidate_rate. |
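Because `event_type` takes a comma-separated string rather than an array, a small helper avoids formatting mistakes. A sketch built only from the table above (the candidate ID `42` is illustrative):

```python
def activity_filter_args(candidate_id, event_types, page=1):
    """Arguments for hires_list_candidate_activities with an event_type filter."""
    supported = {
        "comment", "copilot_response", "stage_moved", "automation_action_triggered",
        "assign_job", "enrichment", "call", "validate_emails", "profile_mutation",
        "qualification", "assign_tags", "assign_sources", "candidate_rate",
    }
    unknown = set(event_types) - supported
    if unknown:
        raise ValueError(f"unsupported event types: {sorted(unknown)}")
    return {
        "id": candidate_id,                   # integer ID or string alias, per the schema
        "event_type": ",".join(event_types),  # comma-separated, as the schema requires
        "page": page,
    }

args = activity_filter_args(42, ["comment", "stage_moved"])
```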
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears the full burden, but it only states that the tool lists activities and supports filtering. It does not disclose pagination behavior, rate limits, or that this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the purpose, and no unnecessary words. Every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with pagination and filtering, the description is sufficiently complete without an output schema. It covers the key functionality and parameter details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage, but the description adds value by listing example event types and clarifying that 'id' can be integer or string. It also explicitly mentions comma-separated event types, which the schema does not.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists timeline activities for a candidate, with specific examples (comments, stage moves, AI responses) and mentions filtering by event type, distinguishing it from sibling list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives, such as when not to use it or which sibling tools might replace it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_candidate_files
List all files attached to a candidate (resumes and other documents). Each entry has uuid, absolute download url (use Bearer auth), relative_time, file metadata (orig_file_name, file_ext, file_type/MIME, readable_size), and type (resume/other).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). |
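The tool description says each returned entry carries an absolute download URL that requires Bearer auth. A hedged sketch of building that request with the standard library (the `url` field name is an assumption, since no output schema is published):

```python
import urllib.request

def file_download_request(entry: dict, api_key: str) -> urllib.request.Request:
    """Build an authenticated GET for one entry from hires_list_candidate_files."""
    return urllib.request.Request(
        entry["url"],  # assumed field name for the absolute download URL
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = file_download_request({"url": "https://example.com/files/cv.pdf"}, "MY_API_KEY")
```

Pass the request to `urllib.request.urlopen` (or port the header to your HTTP client of choice) to fetch the file bytes.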
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description fully carries the burden. It explicitly notes this is a read operation ('List'), discloses the need for Bearer authentication for download URLs, and enumerates the exact fields returned (uuid, download url, relative_time, metadata, type). This is comprehensive for a list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that concisely conveys the action and output. It is front-loaded with the primary purpose. While it packs a lot of detail, it remains readable and each piece of information is valuable. Minor improvement could be breaking into two sentences for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list operation with no output schema, the description sufficiently explains the return structure (fields, auth). However, it does not mention pagination, sorting, or rate limits. Given the tool lists files for a single candidate, pagination may be less critical, but noting any maximum results would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one required parameter 'id' with description 'Candidate ID (integer) or alias (string).' Since schema description coverage is 100%, the baseline is 3. The description does not add any further semantics or examples beyond what the schema already provides, so it meets the baseline without improvement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List all files attached to a candidate', specifying the verb and resource. It differentiates from siblings like hires_get_candidate (gets candidate info), hires_upload_candidate_file (uploads), and hires_download_attachment (downloads specific file). It also details the content of each entry, making its purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used to list candidate files, but does not explicitly state when to use it versus alternatives like hires_get_candidate (for broader candidate info) or hires_download_attachment (for downloading a single file). There is no guidance on prerequisites or exclusions, relying on context from sibling tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_candidate_interviews
List all interviews for a candidate across all applications. Useful for timeline views and scheduling conflict detection.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
| page | No | Page number (1-based). | |
| size | No | Number of items per page. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not detail pagination behavior, sorting, or scope beyond 'across all applications'. However, it correctly implies a safe read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two focused sentences, front-loading the purpose and usage context without any wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with three self-explanatory parameters, the description provides sufficient context, including use cases, without needing to elaborate on return values (no output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All parameters are described in the schema (100% coverage), and the description adds contextual value by clarifying that interviews are across all applications, which goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list interviews) and the resource (for a candidate across all applications), which distinguishes it from the generic sibling tool 'hires_list_interviews'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context for when to use the tool ('timeline views and scheduling conflict detection') but does not explicitly exclude alternatives or state when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_candidate_messages
List email and messaging history for a candidate. Use is_scheduled=1 to filter only pending scheduled messages.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
| page | No | Page number (1-based). | |
| size | No | Number of items per page. | |
| is_scheduled | No | Set to 1 to return only scheduled (not yet sent) messages. |
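The `is_scheduled=1` tip from the description translates directly into a call payload. A minimal sketch using only the parameter names above (the alias 'jane-doe' is illustrative; invocation is left to your MCP client):

```python
def scheduled_messages_args(candidate_id, page=1, size=25):
    """Arguments for hires_list_candidate_messages, limited to pending scheduled messages."""
    return {
        "id": candidate_id,  # integer ID or string alias
        "is_scheduled": 1,   # 1 = only scheduled (not yet sent) messages
        "page": page,
        "size": size,
    }

args = scheduled_messages_args("jane-doe")
```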
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only states the listing action and does not disclose the read-only nature, authentication needs, rate limits, or any behavioral constraints. The description is minimal and lacks transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences: the first states the purpose clearly, and the second provides a filtering tip. It is front-loaded and contains no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the description does not explain return values or pagination behavior. For a list tool, it should mention what the response contains (e.g., message objects) or note that it supports pagination via page/size parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description mentions is_scheduled=1, which is already documented in the schema. It does not add additional meaning or examples beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List email and messaging history for a candidate', specifying the action (list) and the resource (email and messaging history). This distinguishes it from sibling list tools like hires_list_messages and hires_list_candidate_activities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a specific usage tip for filtering scheduled messages with is_scheduled=1, which is helpful. However, it does not explicitly contrast with alternatives like hires_list_messages or hires_send_candidate_message, so the guidance is clear but not fully comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_candidates
List candidates with optional filters. Supports filtering by job, stage, email, name, LinkedIn, and date ranges. Returns paginated results.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Plain-text search by name or email. Supports partial matches. | |
| page | No | Page number (1-based). | |
| size | No | Number of items per page. | |
| | No | Exact candidate email filter. | |
| job_id | No | Filter candidates by job ID. | |
| include | No | Comma-separated list of optional related data to include in the response. | |
| | No | Search by LinkedIn profile URL or alias (e.g. 'johndoe' or full URL). | |
| stage_id | No | Filter candidates by pipeline stage ID. Best used together with job_id. | |
| full_name | No | Candidate full-name filter. | |
| company_id | No | Filter by company ID. Required only when the API key has access to multiple companies. | |
| created_after | No | Return only candidates created after this Unix timestamp (seconds). | |
| updated_after | No | Return only candidates updated after this Unix timestamp (seconds). Useful for incremental sync. |
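Since every filter here is optional, a builder that drops unset values keeps the call payload clean. A sketch restricted to the named parameters in the table above (the two unnamed filter rows are omitted rather than guessed):

```python
def candidate_filter_args(**filters):
    """Arguments for hires_list_candidates; unset filters are omitted."""
    allowed = {
        "q", "page", "size", "job_id", "include", "stage_id", "full_name",
        "company_id", "created_after", "updated_after",
    }
    unknown = set(filters) - allowed
    if unknown:
        raise ValueError(f"unknown filters: {sorted(unknown)}")
    return {k: v for k, v in filters.items() if v is not None}

args = candidate_filter_args(q="jane", job_id=7, stage_id=None)
```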
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions pagination behavior but does not disclose the read-only nature, authentication requirements, rate limits, or side effects. 'List' implies a read operation, but this is not explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no redundant information. Every sentence adds value: purpose, filters, pagination.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 12 parameters and no output schema, the description covers the key behaviors: available filters and pagination. It does not describe the response structure (e.g., candidate object fields), but for a typical list endpoint this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description summarizes filter categories (job, stage, email, name, LinkedIn, date ranges) but adds minimal meaning beyond the already detailed schema property descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List candidates') and resource, and enumerates specific filters (job, stage, email, name, LinkedIn, date ranges) and pagination. It readily distinguishes from sibling tools like hires_create_candidate or hires_get_candidate.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists filters but does not provide explicit guidance on when to use this tool vs alternatives (e.g., hires_list_applications, hires_get_candidate). It implies usage for listing candidates with optional conditions, but lacks when-not or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_candidate_tags
List all tags assigned to a candidate. Useful for segmentation and audience-based automations.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden of behavioral disclosure. It implies a read-only operation ('list all tags') but does not address permissions, rate limits, or potential side effects. For a simple list tool, this is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-loading the action and providing usage context. Every sentence earns its place with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, read-only), the description is minimally complete. However, it does not mention pagination, return format, or whether tags are returned as IDs/labels/objects. For a list tool, these details would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'id', with a clear description of its type and format. The tool's description adds no additional meaning beyond what the schema already provides, so baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List all tags assigned to a candidate,' specifying the verb (list), resource (tags), and scope (assigned to a candidate). This distinguishes it from sibling tool hires_list_tags, which lists all tags globally.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a usage context ('Useful for segmentation and audience-based automations') but does not explicitly state when to use this tool versus alternatives like hires_list_tags (global tags list) or other tag manipulation tools. No exclusions or alternative tool names are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_career_jobs
List publicly visible jobs for a company career site. Supports filtering by department, employment type, city, and country. Use to power a custom careers page.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | Filter by job city (exact match) | |
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) | |
| country | No | Filter by job country (exact match) | |
| company_slug | Yes | Company slug identifying the career site | |
| department_id | No | Filter by department ID | |
| employment_type_id | No | Filter by employment type ID (e.g. Full-time, Part-time) |
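To power a careers page, only `company_slug` is required; the location and taxonomy filters are optional. A sketch using the parameter names from the table above (the slug 'acme' is illustrative):

```python
def career_jobs_args(company_slug, page=1, size=25, **filters):
    """Arguments for hires_list_career_jobs; company_slug is the only required field."""
    allowed = {"city", "country", "department_id", "employment_type_id"}
    unknown = set(filters) - allowed
    if unknown:
        raise ValueError(f"unknown filters: {sorted(unknown)}")
    args = {"company_slug": company_slug, "page": page, "size": size}
    args.update({k: v for k, v in filters.items() if v is not None})
    return args

args = career_jobs_args("acme", city="Berlin")
```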
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description implies read-only behavior by stating 'List publicly visible jobs'. Does not detail response format, pagination details, or authentication needs, but is acceptable for a list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first defines purpose, second specifies filters and use case. No redundant information, front-loaded with key purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description covers purpose and filtering but omits response structure, default pagination behavior, and any required authentication. Given no output schema and moderate complexity (7 params), some additional context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so schema already documents all parameters. Description lists filtering options but adds no new details beyond what schema provides. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool lists publicly visible jobs for a company career site, with specific filtering options. It distinguishes from siblings like hires_get_career_job and hires_list_jobs by targeting external career site use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions 'Use to power a custom careers page', providing a clear use case. Does not explicitly state when not to use or compare to alternatives, but context is adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_categories
List global job categories. Use for job classification and consistent taxonomy mapping.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure. 'List' implies a read-only operation, which is appropriate, but no additional behavioral traits (e.g., pagination, rate limits) are mentioned. The description does not contradict any annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exceptionally concise: two sentences that front-load the key information. Every word serves a purpose, with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description is complete enough. It explains what the tool does and why it's used. A slightly more detailed note about the return format would elevate it to 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is trivially 100%. Per the guidelines, baseline for 0 parameters is 4. The description adds no parameter info but does not need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('list') and resource ('global job categories'), and explains its use for 'job classification and consistent taxonomy mapping.' This distinguishes it from sibling list tools by specifying the unique resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear use case ('job classification and consistent taxonomy mapping') but does not explicitly mention when not to use it or compare it to alternative tools. The context of many list siblings makes the guidance adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_companies (A)
List partner-accessible companies with pagination. Use for tenant discovery and management panels.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) |
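The page/size contract above (server defaults page=1, size=25) is easy to exercise with a client-side paging loop. This is a sketch only: it assumes a generic MCP `call_tool` callable that takes a tool name and an arguments dict and returns the decoded page as a list, which is not the server's documented client API.

```python
def list_all_companies(call_tool, size=25):
    """Collect every company by paging hires_list_companies.

    `call_tool` is assumed to take (tool_name, arguments) and return the
    decoded list of companies for that page; a page shorter than `size`
    is taken as the final page, since no total count is documented.
    """
    page, results = 1, []
    while True:
        batch = call_tool("hires_list_companies", {"page": page, "size": size})
        results.extend(batch)
        if len(batch) < size:  # short page: nothing left to fetch
            break
        page += 1
    return results
```

The short-page stopping rule is itself an assumption; if the server returns an explicit total or `has_more` flag, that signal should be preferred.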
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided; the description does note pagination ('with pagination'), but it does not disclose non-obvious traits like authentication requirements, rate limits, or response size limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two short sentences) and front-loaded with the primary action and context. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lacks details on the response structure (e.g., list format, count) and any ordering or filtering behavior. It covers the basics but not all potential agent needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters are fully described in the schema (100% coverage). The description adds 'with pagination' as context, but does not provide new meaning beyond what the schema already conveys.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool lists partner-accessible companies with pagination, and distinguishes from siblings like hires_list_candidates or hires_list_jobs by specifying resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific use cases ('tenant discovery and management panels'), which gives context, but does not explicitly mention alternatives or cases where this tool should not be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_company_id_mail_accounts (A)
List all mail accounts for all users in a specific company. The company must be accessible (own company or a client).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Company ID | |
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description falls short on disclosing behavioral details beyond listing. It implies a read operation but omits information about permissions, rate limits, or what happens on failure. The accessibility constraint is noted but not expanded.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no fluff. Every sentence adds value: purpose and constraint. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should explain the return format or pagination behavior. It does not mention output structure or that pagination parameters are supported, leaving the agent uncertain about the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents the parameters. The description adds no additional meaning or usage hints for the parameters beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action is listing all mail accounts for all users in a specific company, with a specific verb and resource. It distinguishes from the sibling 'hires_list_user_mail_accounts' (for a single user) and implies the scope via company ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a condition (the company must be accessible) but offers no explicit guidance on when to use the tool, when to avoid it, or which alternatives exist. The condition helps with context but is not a complete guideline.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_company_mail_accounts (A)
List all mail accounts for all users in the current company. Use to resolve from_account_id before creating scheduled emails.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) |
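The description's stated use case, resolving `from_account_id` before creating scheduled emails, can be sketched as a small lookup over the paginated listing. The `id` and `email` field names and the `call_tool` callable are assumptions here; the tool definition does not document the response shape.

```python
def resolve_from_account_id(call_tool, sender_email, size=25):
    """Find the mail-account id for a given sender address.

    Pages through hires_list_company_mail_accounts. The 'id' and 'email'
    keys on each returned account are assumed field names, since no
    output schema is published for this tool.
    """
    page = 1
    while True:
        batch = call_tool("hires_list_company_mail_accounts",
                          {"page": page, "size": size})
        for account in batch:
            if account.get("email") == sender_email:
                return account["id"]
        if len(batch) < size:  # exhausted all pages without a match
            return None
        page += 1
```

The returned id would then feed the `from_account_id` argument of the scheduled-email creation tool.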
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It does not disclose pagination behavior (despite page/size parameters), authentication requirements, rate limits, or the structure of returned data. For a read operation, more detail is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The first states the primary function, the second gives a practical use case. Highly concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 optional params, no output schema), the description is mostly complete for basic usage. It could benefit from mentioning pagination or output details, but it adequately covers the core functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for page and size. The description adds no additional parameter semantics beyond the use case context, which does not directly enhance understanding of the parameters themselves.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all mail accounts for all users in the current company, with a specific use case (resolving from_account_id). It distinguishes itself from sibling tools like hires_list_user_mail_accounts by targeting company-wide accounts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context ('Use to resolve from_account_id before creating scheduled emails'), but does not explicitly exclude alternatives or mention when not to use it. It gives a direct use case but lacks contrast with similar siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_departments (A)
List departments for the company. Use for job organization filters and reporting dimensions.
| Name | Required | Description | Default |
|---|---|---|---|
| company_id | No | Target company ID (uses default company when omitted) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden of behavioral disclosure. It only mentions 'list' which implies reading, but does not describe any side effects, authentication needs, rate limits, or return behavior. Minimal transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, 13 words total, with no extraneous information. Every word adds value: it names the action, resource, and use case.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one optional parameter and no output schema, the description provides sufficient context: what it lists and a typical use case. It could optionally mention the return format (e.g., department names/IDs), but the current text is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (one parameter 'company_id' with a clear description). The tool description adds no additional parameter meaning beyond the schema. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'List departments' and the resource 'for the company'. It also provides usage context: 'Use for job organization filters and reporting dimensions.' This distinguishes it from sibling list tools (e.g., hires_list_candidates, hires_list_jobs) as it specifically targets departments, a unique entity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description hints at when to use (for filters and reporting) but does not explicitly state when not to use or mention alternatives. For a simple list tool, this is adequate but lacks exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_education_levels (A)
List education level taxonomy values. Useful for job requirements and structured matching.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool lists values but does not detail pagination, ordering, or other behavioral aspects. For a simple list operation, this is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short sentences that deliver the key purpose and use case. There is no unnecessary information, and the content is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description is complete enough. It explains what the tool lists and its relevance. It could mention if the list is exhaustive or filterable, but it's sufficient for a simple taxonomy list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (0 params, 100% coverage). The description adds no extra parameter information, but since there are none, the baseline score of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'education level taxonomy values', providing a specific purpose. It distinguishes itself from generic list tools but does not explicitly differentiate from similar taxonomy list siblings like hires_list_categories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes 'Useful for job requirements and structured matching', which implies when to use the tool. However, it does not provide explicit guidance on when not to use it or mention alternatives, leaving room for ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_email_templates (A)
List email templates for the target company. Returns paginated results with template name, subject, and body.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) | |
| company_id | No | Target company ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It correctly indicates a read-only list operation with pagination, but does not disclose authorization needs, rate limits, or the behavior when 'company_id' is omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences. Each sentence adds essential information: the action and input scope, and the output characteristics.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return values and pagination. However, it omits details about sorting, filtering, or the optionality of 'company_id', and does not clarify whether the tool applies any default company context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds value by specifying the output fields (template name, subject, body), which are not in the input schema, helping the agent understand what data the tool returns.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List email templates for the target company' with a specific verb and resource. It differentiates from siblings like 'hires_get_email_template' (singular) and 'hires_create/update/delete_email_template' but does not explicitly call out these distinctions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a list of email templates but provides no explicit guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_employment_types (A)
List supported employment types (full-time, part-time, contract, etc.). Use for validation and normalization.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
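The "validation and normalization" use case named in the description can be illustrated with a small helper that maps free-text input onto the listed taxonomy. This is a sketch under assumptions: the server's actual value strings are unknown, so matching here is a simple case-, space-, and punctuation-insensitive comparison.

```python
def normalize_employment_type(raw, supported):
    """Map free-text input onto one of the supported employment types.

    `supported` stands in for the value list returned by
    hires_list_employment_types; returns the canonical value, or None
    when no taxonomy entry matches.
    """
    def canon(s):
        # collapse case, spaces, hyphens, and underscores before comparing
        return s.lower().replace("-", "").replace("_", "").replace(" ", "")

    lookup = {canon(value): value for value in supported}
    return lookup.get(canon(raw))
```

So "Full Time", "full-time", and "FULL_TIME" would all normalize to the same taxonomy value before being written into a job record.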
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. 'List' implies a read-only operation, but the description does not explicitly state that the tool is side-effect free or whether it requires authentication. For a simple list tool this is minimal but adequate; more transparency would still help.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the action and then provides context. Every word earns its place; it is concise and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (no parameters, no output schema, no annotations), the description is largely complete. It explains what the tool lists and why. Mentioning the return format (e.g., a list of strings) would add polish, but it is not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the description does not need to add meaning beyond the schema. The baseline for 0 parameters is 4. The description adds context about usage (validation/normalization), which is helpful.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and the resource 'supported employment types', with examples in parentheses. It distinguishes from sibling tools by being specific to employment types, and there are no other tools with overlapping purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear use case: 'Use for validation and normalization.' This implies when to use the tool, though it does not explicitly mention when not to use it or compare to alternatives. Given the uniqueness of the tool among siblings, this is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_experience_levels (B)
List experience level taxonomy values for role seniority modeling.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden for behavioral disclosure. It only states the action without revealing consequences, required permissions, or return characteristics. The agent cannot assess safety or side effects from this description alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, focused sentence that efficiently conveys the core purpose. It is front-loaded and avoids redundancy, though it could be marginally more informative without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should clarify return values (e.g., labels, IDs, structure). It does not, leaving the agent to infer the format. However, for a simple taxonomy list tool, the minimal description may be sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema coverage, so the baseline is 4. The description adds domain-specific context ('role seniority modeling') that provides meaning beyond the empty schema, helping the agent understand the taxonomic scope.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists experience level taxonomy values for role seniority modeling. It uses a specific verb ('list') and resource ('experience level taxonomy values'), distinguishing it from sibling tools like 'hires_list_education_levels' or 'hires_list_statuses'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. While the tool's purpose is straightforward, the lack of context (e.g., prerequisites, typical use cases) leaves the agent without direction for selection compared to other list tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_forms (C)
List application forms (paginated). Returns forms with their questions for the target company.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number. | |
| size | No | Page size. | |
| company_id | No | Target company ID. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description only states it's paginated and returns forms with questions, but does not disclose read-only nature, rate limits, authentication needs, or any side effects. Minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, concise and front-loaded, with no unnecessary words, though the text could be structured for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description partially explains what is returned (forms with questions). However, it does not clarify what happens when company_id is omitted, the pagination defaults, or how this tool relates to similar list tools like list_questions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear parameter descriptions. The tool description adds that it returns forms with questions and targets a company, aligning with company_id. Value added beyond schema is modest.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it lists application forms with pagination and includes questions for a target company. The verb 'List' and resource 'application forms' are specific, distinguishing it from sibling list tools for other entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It does not mention any prerequisites, exclusions, or context like whether company_id is required, though the schema shows it as optional.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_hiring_team (A)
List users currently assigned to a job's hiring team. Useful for notification routing and collaboration tooling.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. The description only indicates a read operation ('List') but does not cover authentication requirements, error handling (e.g., missing job ID), rate limits, or that the tool returns a list. With zero annotation coverage, more detail is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no fluff. The first sentence defines the core purpose, and the second sentence adds utility context. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no nested objects), the description provides enough context to understand its purpose and use case. It could mention that it returns a list of user objects, but the omission is minor for a straightforward list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single parameter 'id', which already explains what it is (Job ID numeric or alias). The description adds no additional parameter information, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List users currently assigned to a job's hiring team.' It uses the specific verb 'List' and resource 'users on hiring team', effectively distinguishing it from related sibling tools like hires_add_hiring_team_member (which adds a member) and hires_list_users (which lists all users).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context with 'Useful for notification routing and collaboration tooling', implying when this tool is appropriate. However, it does not explicitly state when not to use it or mention alternatives, which would elevate it to a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_interviews (B)
List interviews with optional filters by job, application, candidate, interviewer, date, or timestamps for incremental sync. Returns paginated results.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | Filter by interview date (YYYY-MM-DD, UTC) | |
| page | No | Page number (default 1) | |
| size | No | Page size (default 20) | |
| job_id | No | Filter interviews by job ID | |
| include | No | Comma-separated related resources to embed: candidate, application, job | |
| candidate_id | No | Filter interviews by candidate ID | |
| created_after | No | Return only interviews created after this Unix timestamp (seconds) | |
| updated_after | No | Return only interviews updated after this Unix timestamp (seconds) | |
| application_id | No | Filter interviews by application ID | |
| interviewer_user_id | No | Filter interviews by interviewer user ID |
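The `updated_after` filter is what makes incremental sync practical: record a checkpoint before each run, then pass the previous checkpoint on the next call. A minimal sketch, assuming only the parameter names in the table above (the helper itself is hypothetical):

```python
import time

def interview_sync_args(last_sync_epoch, page=1, size=20):
    """Build arguments for one hires_list_interviews call that fetches
    only interviews changed since the previous sync run. Hypothetical
    helper; the key names come from the parameter table above."""
    return {
        "updated_after": int(last_sync_epoch),   # Unix timestamp, seconds
        "include": "candidate,application,job",  # comma-separated embeds
        "page": page,                            # default 1
        "size": size,                            # default 20
    }

# Capture "now" before calling, and use it as last_sync_epoch next run,
# so updates arriving during the call are not missed.
next_checkpoint = int(time.time())
args = interview_sync_args(last_sync_epoch=1700000000)
```

Capturing the checkpoint before the call (rather than after) trades a few duplicate rows for a guarantee of no missed updates.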
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description partially carries the transparency burden. It mentions pagination but does not state that the operation is read-only and non-destructive, nor does it disclose any authentication or rate-limit details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, focused sentence that front-loads the core purpose. It is concise with no extraneous words, though it could be slightly more structured (e.g., listing filters explicitly).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks key details: no output schema, no explanation of paginated result structure, and no context on incremental sync usage. For a tool with 10 parameters and many siblings, more completeness is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all parameters. The description adds no extra meaning beyond categorizing filters (e.g., 'for incremental sync' is vague). It does not explain the 'include' parameter or timestamp format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists interviews with optional filters, which is specific. However, it does not differentiate from sibling 'hires_list_candidate_interviews' or 'hires_get_interview', leaving ambiguity about when to use each.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'hires_list_candidate_interviews'. The description merely lists filters without explaining prerequisites or preferred use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_job_boards (A)
Get current board publication state for a specific job. Returns which job boards the job is published to. Useful for distribution dashboards and posting audits.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description bears full burden. It accurately describes the tool as a read operation returning board publication state without side effects. This is sufficient for a simple read tool, but could add more context about auth or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first defines action and scope, second states output and use case. No fluff, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one parameter and no output schema, the description fully covers what the tool returns (which job boards) and its purpose, leaving no obvious gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, with the parameter described as 'Job ID (numeric) or alias'. The description adds no extra meaning beyond what the schema provides, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Get current board publication state' and specifies the resource 'for a specific job'. It differentiates from sibling tools like hires_list_boards (which lists all boards) by focusing on a single job's publication state.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions it's useful for distribution dashboards and posting audits, implying a read-only audit context. However, it does not explicitly state when to use vs. alternatives (e.g., hires_batch_job_boards, hires_list_boards) or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_jobs (A)
List jobs with optional filters by status, date range, department, or search query. Returns paginated results. Use for career-site sync, reporting, and external system indexing.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Search by job title or internal title (partial match) | |
| page | No | Page number (default 1) | |
| size | No | Page size (default 20) | |
| status | No | Filter by job status name (from GET /taxonomy/statuses, e.g. Public, Draft, Archived) | |
| include | No | Comma-separated related resources to embed: workflow, hiring_team, pipeline_stages | |
| company_id | No | Filter by company ID (required only for multi-company API keys) | |
| department_id | No | Filter jobs by department ID (from GET /taxonomy/departments) | |
| updated_after | No | Return only jobs updated after this Unix timestamp (seconds). Use for incremental sync. | |
| created_at_end | No | Return only jobs created at or before this Unix timestamp (seconds) | |
| created_at_start | No | Return only jobs created at or after this Unix timestamp (seconds) |
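The date-range parameters expect Unix timestamps in seconds, while callers typically start from calendar dates. A small sketch of assembling a filter payload, assuming only the parameter names documented in the table above (the helper and its tuple convention are hypothetical):

```python
from datetime import datetime, timezone

def job_filter_args(status=None, start_day=None, end_day=None, q=None):
    """Assemble hires_list_jobs arguments, converting (year, month, day)
    tuples into the Unix-second timestamps the schema expects.
    Hypothetical helper; only documented parameter names are used."""
    def to_epoch(day):
        y, m, d = day
        # Midnight UTC of the given calendar day, as Unix seconds
        return int(datetime(y, m, d, tzinfo=timezone.utc).timestamp())

    args = {}
    if status is not None:
        args["status"] = status                      # e.g. Public, Draft, Archived
    if start_day is not None:
        args["created_at_start"] = to_epoch(start_day)
    if end_day is not None:
        args["created_at_end"] = to_epoch(end_day)
    if q is not None:
        args["q"] = q                                # partial title match
    return args

args = job_filter_args(status="Public", start_day=(2024, 1, 1))
```

Omitted filters are left out of the payload entirely rather than sent as null, which keeps the request aligned with the "optional filters" framing in the description.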
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description should fully disclose behavior. It mentions 'Returns paginated results' but does not specify idempotency, auth requirements, or that it is a read-only operation. The description is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first describes functionality, second gives use cases. Every sentence adds value, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters and no output schema, the description covers pagination and use cases but lacks detail about the response format, default behavior, or how it differs from similar list tools. Adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description does not need to add much. It summarizes the main filter types but omits parameters like 'include' for related resources. No additional meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List jobs' with optional filters and pagination, and lists use cases (career-site sync, reporting, external indexing). However, it does not distinguish from the sibling tool hires_list_career_jobs, which might serve a similar but different purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases for when to use the tool. However, it does not mention when not to use it or suggest alternatives like hires_get_job for single job details or hires_list_career_jobs for public career site listing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_job_webhooks (A)
List webhooks configured for job-level events. Use to audit subscriptions and deployment state.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description implies a read-only list operation but does not explicitly state safety or side effects. Adequate for a simple read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and use case. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, description adequately covers the tool's function and use. Could mention return format, but minimal info is sufficient for this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides a description for the 'id' parameter (Job ID or alias). The tool description adds no extra meaning, but schema coverage is 100%, so the baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists webhooks configured for job-level events, distinguishing it from 'hires_list_webhooks' which likely lists all webhooks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly suggests use for auditing subscriptions and deployment state, providing clear context. No explicit exclusions or alternatives mentioned, but usage is well implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_messages (A)
List messages sent or scheduled from a specific mail account. Returns outbound messages only (sent and scheduled), not received. Useful for monitoring cold outreach campaigns — check pending queue, delivery history, and plan next sends.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based). Default: 1. | |
| size | No | Number of items per page (1-100). Default: 20. | |
| status | No | Filter by message status: `scheduled` (pending send), `sent` (delivered), `all` (both). Default: `all`. | |
| date_to | No | End of period (unix timestamp, seconds). Filters on scheduled/sent time. | |
| date_from | No | Start of period (unix timestamp, seconds). Filters on scheduled/sent time. | |
| from_account_id | Yes | ID of the mail account (from `GET /companies/mail-accounts` or `GET /users/{user_id}/mail-accounts`). |
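This is one of the few tables that states hard constraints (size 1-100, a three-value status enum), which an agent can validate client-side before calling. A minimal sketch, assuming only the parameter names and ranges documented above (the helper itself is hypothetical):

```python
def message_page_args(from_account_id, page=1, size=20, status="all"):
    """Build arguments for one hires_list_messages page. Clamps size to
    the documented 1-100 range and rejects unknown status values.
    Hypothetical helper; keys mirror the parameter table above."""
    if status not in ("scheduled", "sent", "all"):
        raise ValueError(f"status must be scheduled, sent, or all, got {status!r}")
    return {
        "from_account_id": from_account_id,  # required mail-account ID
        "page": max(1, page),                # 1-based
        "size": min(100, max(1, size)),      # schema allows 1-100
        "status": status,                    # scheduled | sent | all
    }

args = message_page_args(from_account_id=42, status="scheduled")
```

Passing `status="scheduled"` is the natural way to inspect the pending queue the description mentions; `"sent"` covers delivery history.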
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It clarifies the tool returns only outbound messages, but lacks details on pagination behavior (though schema provides page/size), rate limits, or authentication requirements. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, and then a practical use case. Every sentence adds value without repetition or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should hint at return structure. It only says 'returns outbound messages' without detailing fields, pagination metadata, or error conditions. Sufficient for a simple list tool but leaves gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so each parameter already has a description. The tool description adds no extra meaning beyond the schema, meeting the baseline but not exceeding it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists outbound messages from a specific mail account, explicitly excluding received messages. This distinguishes it from siblings like hires_list_candidate_messages, providing precise scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives a concrete use case (monitoring cold outreach campaigns) and mentions checking pending queue and delivery history. However, it does not explicitly state when to avoid this tool or compare to alternatives, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_notes (A)
List notes by candidate. Returns paginated discussion notes for a candidate. Use for shared recruiter context and timeline synchronization.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| size | No | Page size | |
| include | No | Include related resources, e.g. 'user' for author details | |
| candidate_id | Yes | Candidate ID (numeric) or alias |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions pagination but does not explicitly state it is read-only or safe. Could disclose that it only retrieves notes without modification, but it's implied by 'list'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-loading the core functionality. No filler; every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description covers purpose and pagination. Could mention that it is read-only, but given low complexity and sibling tools, it is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions for all 4 parameters. Description adds 'pagination' context but no added meaning beyond schema. Baseline score of 3 applies since schema is sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists notes by candidate with paginated results. Differentiates from sibling tools like hires_list_candidate_messages (messages) and hires_create_note (create). Uses specific verb 'List' and resource 'notes by candidate'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions use case: 'shared recruiter context and timeline synchronization'. Does not explicitly state when not to use or alternatives, but context is clear enough for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_nurture_campaigns (A)
List nurture campaigns with pagination. Returns campaign summaries including steps.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) | |
| company_id | No | Target company ID (optional if API key is scoped to one company) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions pagination and the returned campaign summaries, but does not explicitly declare read-only behavior or the absence of side effects. For a list tool, this is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two short sentences. The main purpose is front-loaded in the first sentence, and every word adds value. No unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, so the description hints at return content ('campaign summaries including steps'), but lacks details on specific fields. Given the simplicity of the tool, this is somewhat incomplete but acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for all parameters (page, size, company_id). The description adds no extra meaning beyond the schema, so the baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List', the resource 'nurture campaigns', and key features like pagination and return content (campaign summaries including steps). It distinguishes itself from sibling tools like hires_get_nurture_campaign for single campaign retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing nurture campaigns but provides no explicit guidance on when to use this tool versus alternatives (e.g., get_nurture_campaign for a single campaign). No when-not-to-use or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_origins (A)
List candidate origin taxonomy values. Use for attribution analytics and source normalization.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden of behavioral disclosure. It does not mention whether the operation is read-only, requires authentication, or has any side effects. The description is purely functional.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, immediately stating the action followed by use cases. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not specify the format of the returned taxonomy values. It is adequate for a simple list call but lacks details about output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so schema coverage is trivially 100%. The description adds no parameter semantics, but a tool with zero parameters earns a baseline of 4, so this score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists candidate origin taxonomy values, with a specific verb and resource. It distinguishes from sibling tools like hires_list_sources by specifying 'origin taxonomy' rather than generic sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear use case ('attribution analytics and source normalization') but does not explicitly state when not to use it or contrast with alternatives like hires_list_sources.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_questions (A)
List paginated question catalog for the company.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) | |
| company_id | No | Target company ID (uses default company when omitted) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies a read operation via 'List' but does not explicitly state read-only or safety behavior. The description is minimal but not misleading.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action and resource, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 3 parameters and no output schema, the description is adequate for a list tool but lacks details on return format, sorting, or filtering beyond pagination. It could be more comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all 3 parameters described). The description mentions 'paginated' and 'for the company', which adds minimal context beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List paginated question catalog for the company' clearly states the action (List), resource (question catalog), and key feature (paginated). It distinguishes itself well from sibling tools like hires_create_question, hires_get_question, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus other list tools or alternatives. There is no mention of prerequisites, context, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_question_types (A)
List available question types supported by the platform. Use to drive dynamic form builders.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must cover behavioral traits. It declares a read-only list operation without side effects, but does not mention authentication requirements, rate limits, or output format. Adequate for a simple tool but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no waste. Purpose is front-loaded, and the second sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description provides a clear purpose and usage context. It could optionally specify the return format, but the tool is simple enough that the current description is sufficient for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in the schema, so the description need not add parameter info. Baseline score of 4 applies, and the description does not contradict this.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'List available question types supported by the platform', specifying the verb and resource. The additional phrase 'Use to drive dynamic form builders' provides context that distinguishes it from sibling tools like hires_list_questions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for dynamic form builders, which gives context. However, it does not explicitly state when not to use or mention alternatives among siblings. The guidance is clear but lacks exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_rejection_reasons (A)
List configured rejection reasons for the company. Use to validate rejection actions and analytics.
| Name | Required | Description | Default |
|---|---|---|---|
| company_id | No | Target company ID (uses default company when omitted) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read-only operation ('List'), but with no annotations provided, it fails to explicitly state its non-destructive nature or any other behavioral traits. It adds minimal context beyond the obvious.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff, front-loaded with the core action and resource. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one optional parameter and no output schema, the description covers the basics but omits return format or pagination details. It is adequate but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already documents the single optional parameter (company_id) with description. The tool description does not add any additional meaning or constraints beyond what the schema provides, so score is baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists configured rejection reasons, which is a specific resource. It distinguishes from sibling list tools by focusing on rejection reasons, and includes a usage hint for validation and analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a usage hint ('Use to validate rejection actions and analytics') but does not specify when not to use this tool or mention alternative tools for similar purposes. More explicit guidelines would improve differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_sources (grade A)
List candidate sources for the company. Use for attribution sync and reporting consistency.
| Name | Required | Description | Default |
|---|---|---|---|
| company_id | No | Target company ID (uses default company when omitted) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as read-only nature, side effects, authentication, or rate limits. For a list operation, the lack of mention of safety or pagination is a gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences: one for function, one for purpose. No wasted words, front-loaded and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool without an output schema, the description is nearly complete. The only omission is a hint of what the output contains (e.g., a list of source names/IDs), but it is sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter well-described. The tool description adds no new parameter meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists candidate sources for the company, with a specific verb and resource. It is distinct from sibling tools, all of which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions using for 'attribution sync and reporting consistency', giving context. However, it does not provide explicit when-not-to-use or alternative tools, though no sibling directly overlaps.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_statuses (grade A)
List job status labels (draft, published, on_hold, closed, archived). Cache to validate job status updates.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
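The cache-and-validate hint in the description can be sketched as a small client-side helper. The status labels below come straight from the tool description; the validation function itself is a hypothetical helper, not part of the server's API.

```python
# Job status labels enumerated in the hires_list_statuses description.
VALID_JOB_STATUSES = {"draft", "published", "on_hold", "closed", "archived"}

def validate_job_status(status: str) -> str:
    """Check a status against the cached label set before calling a status-update tool."""
    if status not in VALID_JOB_STATUSES:
        raise ValueError(
            f"unknown job status {status!r}; expected one of {sorted(VALID_JOB_STATUSES)}"
        )
    return status
```

In practice the set would be refreshed from a live `hires_list_statuses` call rather than hard-coded, in case the platform adds labels.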
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description adds behavioral context by indicating it's a read-only cacheable operation for validation, which is helpful for understanding tool impact.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first defines the purpose, second adds a usage hint. No fluff, efficient communication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple zero-parameter list tool without output schema, the description fully covers what the tool does, what it returns, and a practical usage hint, making it complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters in the input schema, the description adds value by enumerating the exact statuses returned, enhancing understanding of the output beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists job status labels and provides the specific statuses (draft, published, on_hold, closed, archived), making the purpose unambiguous and distinct from sibling tools like hires_list_jobs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It suggests caching to validate job status updates, implying when to use (before status updates). However, it doesn't explicitly exclude other uses or compare to alternatives like hires_set_job_status.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_tags (grade A)
List all tags for the company. Returns paginated results. Recommended to cache for fast tagging UX.
| Name | Required | Description | Default |
|---|---|---|---|
| company_id | No | Target company ID (uses default company when omitted) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses pagination and a caching recommendation. With no annotations provided, the description carries the full burden. It lacks details on the pagination mechanism (e.g., page number vs. cursor).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, each adding value: purpose, pagination, caching. No filler, well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple list tool with one optional parameter. Mentions pagination and caching, but lacks details on pagination parameters (e.g., page size, how to iterate).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of the single parameter. Description does not add extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'List all tags for the company' with a specific verb and resource. Distinguishes from sibling tools like 'hires_list_candidate_tags' which lists tags per candidate.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Recommends caching for fast tagging UX, which gives usage context. Does not explicitly contrast with alternatives, but the purpose is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_template_placeholders (grade A)
List available placeholders for email templates with pagination. Use type to filter by category, q to search by label. Discover placeholders here, then use hires_prepare_template_placeholders to get an HTML tag for insertion.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Filter placeholders by label (case-insensitive substring match) | |
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) | |
| type | No | Filter by placeholder type | |
| company_id | No | Target company ID (uses default company when omitted) | |
| is_notification | No | Include notification-specific system placeholders (0 or 1, default 0) |
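As a sketch of how an agent might combine these filters, the following builds a standard JSON-RPC 2.0 `tools/call` envelope for this tool. The parameter names come from the table above; `make_tool_call` is a hypothetical client helper, and the argument values are illustrative only.

```python
import json

def make_tool_call(name: str, arguments: dict, call_id: int = 1) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0 envelope)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Search placeholders whose label contains "candidate", including
# notification-specific system placeholders, with a larger page size.
request = make_tool_call("hires_list_template_placeholders", {
    "q": "candidate",
    "page": 1,
    "size": 50,
    "is_notification": 1,  # 0 or 1 per the schema; 1 includes system placeholders
})
```

The discovered placeholder would then be passed to `hires_prepare_template_placeholders`, per the description's two-step workflow.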
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits such as read-only nature, rate limits, or side effects. It mentions pagination but does not elaborate on behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the purpose and then providing usage guidance. Every sentence adds value, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description does not explain what the list returns (e.g., placeholder objects). It adequately covers the discovery workflow but misses details on pagination behavior and return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description reinforces using 'type' and 'q' but does not add new semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists available placeholders for email templates with pagination, and differentiates from the sibling tool `hires_prepare_template_placeholders` by indicating the next step to get an HTML tag.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly tells how to filter using 'type' and 'q', and connects to a sibling tool for the next step, providing good usage context. However, it does not explicitly state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_user_mail_accounts (grade A)
List mail accounts connected to a user. Use to resolve from_account_id before creating scheduled emails via POST /candidates/{id}/messages.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | User ID | |
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, description is minimal; it implies read-only behavior but does not explicitly state it or mention other traits like permissions or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with action, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the core purpose and use case; pagination is documented in the schema, so it is not needed in the description; there is no output schema, but the response shape is implied.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers all parameters with descriptions (100% coverage); the description adds context linking `from_account_id` to usage but no new parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists mail accounts for a user and distinguishes from siblings by specifying 'user' unlike company mail account tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises using to resolve `from_account_id` before creating scheduled emails, but does not mention when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_users (grade A)
List users for the target company with role context. Returns paginated results useful for access reviews and hiring-team management.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (default 1) | |
| size | No | Page size (default 25) | |
| company_id | No | Company ID to list users for |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description mentions pagination and role context but does not disclose behavior for missing parameters, default ordering, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with front-loaded action word and no redundant information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema; description vaguely mentions 'paginated results useful for access reviews' but does not specify return fields or structure, leaving gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema provides full parameter descriptions; description adds general context (role context, pagination) but no additional meaning for individual parameters beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists users for a specific company with role context, distinguishing it from other list tools like hires_list_companies or hires_list_hiring_team.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions usefulness for access reviews and hiring-team management, but does not explicitly exclude cases or compare with sibling tools like hires_list_hiring_team.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_webhooks (grade B)
List webhook subscriptions configured at company scope.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Company ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read-only operation ('List') but lacks details on pagination, limits, or response format. With no annotations, the description provides minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no unnecessary words. It is concise, though it could be more structured with additional details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one parameter and no output schema, the description covers the basics. However, it lacks behavioral details like pagination or scope clarification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter 'id' is described as 'Company ID'. The description adds no further semantic meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List'), the resource ('webhook subscriptions'), and the scope ('at company scope'), which distinguishes it from siblings like 'hires_list_job_webhooks'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'hires_list_job_webhooks'. The description does not mention when company-level vs job-level webhooks are appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_workflows (grade A)
List workflows with embedded stages for the company. Use to build stage-aware integrations and routing rules.
| Name | Required | Description | Default |
|---|---|---|---|
| company_id | No | Target company ID (uses default company when omitted) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as being read-only, pagination, or permissions. It only states the basic output (lists workflows with stages).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core purpose and a use case, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 optional parameter, no output schema), the description adequately explains the tool's purpose and a typical use case. Could be improved by mentioning the return structure (list of workflows with stages).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for its single parameter (company_id) with a description. The tool description adds no additional meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists workflows with embedded stages, which distinguishes it from the sibling tool hires_list_workflow_stages that likely lists stages alone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear use case ('build stage-aware integrations and routing rules') but does not explicitly exclude when not to use or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_list_workflow_stages (grade A)
List pipeline stages filtered by workflow or job. Useful for transition UIs and workflow validation.
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | No | Filter stages by job ID (returns stages from the job's assigned workflow) | |
| company_id | No | Target company ID (uses default company when omitted) | |
| workflow_id | No | Filter stages by workflow ID (from hires_list_workflows) |
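Since all three parameters are optional, a caller needs to assemble only the filters it actually uses. The sketch below treats `job_id` and `workflow_id` as alternatives, which is an assumption: the schema does not say how the two filters combine if both are sent. `stage_list_args` is a hypothetical helper.

```python
def stage_list_args(job_id=None, workflow_id=None, company_id=None) -> dict:
    """Build arguments for hires_list_workflow_stages, dropping unset filters."""
    if job_id is not None and workflow_id is not None:
        # Assumption: one filter at a time; combining both is undocumented.
        raise ValueError("pass job_id or workflow_id, not both")
    args = {}
    if job_id is not None:
        args["job_id"] = job_id
    if workflow_id is not None:
        args["workflow_id"] = workflow_id
    if company_id is not None:
        args["company_id"] = company_id
    return args
```

A `workflow_id` would typically come from a prior `hires_list_workflows` call, as the parameter description notes.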
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations present, so description carries full burden. It does not disclose read-only nature, authentication needs, rate limits, or side effects. Simply states 'list' without behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with core purpose. No wasted words: first sentence states action and filtering, second sentence provides usefulness. Ideal length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description does not explain return format (e.g., list of stage objects, fields). It covers purpose and filtering but lacks details about response structure. Adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, with descriptions for all 3 parameters. The description adds the phrase 'filtered by workflow or job', which summarizes but does not add meaningful detail beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'List' and resource 'pipeline stages', with specific filtering options (by workflow or job). It also provides use case context (transition UIs, workflow validation), distinguishing it from siblings like hires_get_workflow_stages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for listing stages with optional filters but does not explicitly state when to use this tool over alternatives like hires_get_workflow_stages or hires_list_workflows. No when-not or exclusion criteria provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_move_application (grade A)
Move an application to a specific pipeline stage. Use this for explicit stage transitions in workflow orchestration. You need the target stage_id (get it from the job's pipeline_stages).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. | |
| stage_id | Yes | Target pipeline stage ID. |
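The description's prerequisite (resolve the target `stage_id` from the job's `pipeline_stages`) can be sketched as below. The stage object shape is assumed, since no output schema is given; `find_stage_id` is a hypothetical helper and the IDs are illustrative.

```python
def find_stage_id(pipeline_stages: list[dict], stage_name: str) -> int:
    """Resolve a stage name to its ID from a job's pipeline_stages (assumed shape)."""
    for stage in pipeline_stages:
        if stage.get("name") == stage_name:
            return stage["id"]
    raise LookupError(f"no stage named {stage_name!r}")

# Assumed pipeline_stages shape from a job lookup.
stages = [{"id": 1, "name": "Applied"}, {"id": 2, "name": "Interview"}]
move_args = {
    "id": 4321,                                    # application ID
    "stage_id": find_stage_id(stages, "Interview"),
    "include": "candidate",                        # optionally embed the candidate
}
```

Resolving the stage by name first avoids the most likely first-attempt failure: passing a stage ID from a different job's workflow.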
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It only states the action (move) without mentioning side effects, permissions, reversibility, or response format. For a mutation tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core action, and every sentence is useful. No fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description should provide more context about return values, side effects, or potential pitfalls. It only covers the basic purpose and a prerequisite, leaving many usage questions unanswered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers all 3 parameters (100% coverage). The description adds guidance for the stage_id parameter but does not mention the 'include' parameter. This adds some value but does not significantly improve upon the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Move an application to a specific pipeline stage', differentiating it from related tools like 'hires_advance_application' (likely for advancing to next stage) and 'hires_transfer_application' (possibly for changing job). It specifies the resource (application) and the destination (pipeline stage).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description tells the user to use this tool for 'explicit stage transitions in workflow orchestration' and provides a prerequisite: get the target stage_id from the job's pipeline_stages. It does not explicitly mention when not to use or name alternatives, which would raise the score to 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_patch_message (grade A)
Partially update a scheduled message before send time. Only provided fields are changed.
| Name | Required | Description | Default |
|---|---|---|---|
| cc | No | Carbon-copy recipient email addresses. | |
| id | Yes | Message ID. | |
| to | No | Primary recipient email addresses. | |
| bcc | No | Blind carbon-copy recipient email addresses. | |
| body | No | Email body as HTML. | |
| subject | No | Email subject line. | |
| scheduled_at | No | Updated send time as a Unix timestamp in seconds. | |
| from_account_id | No | Sending mail account ID. If omitted, the API key owner's default mail account is used. | |
| reply_to_email_id | No | Optional mailbox message ID to reply to. | |
| send_in_new_thread | No | Whether to send the updated message as a new thread. |
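Because only provided fields are changed, a caller should build a sparse arguments object rather than resending the whole message. The sketch below enforces that; the field set is copied from the table above, while `build_patch_args` itself is a hypothetical helper.

```python
# Optional fields accepted by hires_patch_message, per the parameter table.
PATCHABLE_FIELDS = {
    "to", "cc", "bcc", "subject", "body", "scheduled_at",
    "from_account_id", "reply_to_email_id", "send_in_new_thread",
}

def build_patch_args(message_id: int, **changes) -> dict:
    """Build arguments for hires_patch_message, including only fields being changed."""
    unknown = set(changes) - PATCHABLE_FIELDS
    if unknown:
        raise ValueError(f"not patchable: {sorted(unknown)}")
    return {"id": message_id, **changes}

# Reschedule and retitle without touching recipients or body.
args = build_patch_args(987, subject="Updated interview details",
                        scheduled_at=1767225600)
```

This mirrors PATCH semantics: omitted fields keep their current values, so the helper never sends defaults for fields the caller did not set.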
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It only states the partial update nature and timing. It does not disclose idempotency, whether the update is reversible, what happens if the message has already been sent, or any authorization requirements. For a mutation tool with 10 parameters, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short, efficiently conveying the core purpose and key constraint. No extraneous information. It could be slightly more structured (e.g., separating usage from behavior), but it earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters, no output schema, no annotations), the description is too sparse. It fails to mention return values, error scenarios, or required context (e.g., the message must exist and be scheduled). The schema provides field details, but the description lacks holistic context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage across all 10 parameters. The description adds no additional meaning beyond the schema. According to the rules, baseline is 3 when coverage is high. No extra value provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a partial update on a scheduled message, using the verb 'patch' (implied by 'Partially update'). It specifies the scope ('scheduled message before send time') and behavior ('only provided fields are changed'). This differentiates it from siblings like hires_update_message (full update) and hires_send_candidate_message (send).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'before send time' indicating the timing constraint. However, it does not explicitly state when to use this tool versus hires_update_message or hires_send_candidate_message, nor does it list prerequisites or exclusions. The guidance is implied but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_prepare_template_placeholders (Grade: A)
Convert a placeholder reference into an HTML tag for insertion into an email template body.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Placeholder type (system, candidate_column, job_variable, questionnaire_link, scheduling_link) | |
| identifier | No | Placeholder identifier | |
| job_variable_id | No | Job variable ID | |
| form_question_id | No | Form question ID | |
| system_column_title | No | System column title | |
| qas_profile_question_id | No | Profile question ID |
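A hedged sketch of a valid argument set, checking `type` against the enumerated values from the table (the values are from the schema; the validation itself is illustrative):

```python
ALLOWED_TYPES = {"system", "candidate_column", "job_variable",
                 "questionnaire_link", "scheduling_link"}

# Hypothetical call arguments for a job-variable placeholder.
args = {"type": "job_variable", "job_variable_id": 33}

assert args["type"] in ALLOWED_TYPES
```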
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; the description does not disclose behavioral traits beyond the conversion (e.g., side effects, error handling, or that it's a pure function). For a simple transformation, basic clarity is maintained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is clear and to the point, but slightly over-minimal for a tool with 6 parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description does not explain the output format or provide context on how the HTML tag is structured, leaving agents to infer usage, though the sibling tool context helps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all parameters described), so the baseline is 3. The description adds no extra meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (convert placeholder reference) and the resource (into HTML tag for email template body), distinguishing it from siblings like 'hires_list_template_placeholders' which lists placeholders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives; usage is implied by the purpose, but no guidance on prerequisites or complementary tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_publish_to_job_board (Grade: B)
Activate selected job boards for a job. Sets boards to activation queue state. Use for controlled multi-board publishing workflows.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias | |
| boards | No | Array of board identifiers to activate (e.g. ['indeed', 'linkedin']) |
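As a sketch, an argument set for this tool might look as follows (board identifiers mirror the schema's example; the shape is illustrative):

```python
# Hypothetical hires_publish_to_job_board arguments. 'id' accepts a
# numeric job ID or a string alias, per the schema.
args = {"id": 512, "boards": ["indeed", "linkedin"]}

assert isinstance(args["id"], (int, str))
assert all(isinstance(b, str) for b in args["boards"])
```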
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must fully disclose behavior. It reveals that boards are set to an 'activation queue state', indicating a queued effect rather than immediate publication. However, it lacks details on permissions, reversibility, idempotency, and side effects, leaving significant gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and front-loaded with the primary action. It avoids redundancy. However, it could be more efficiently structured by combining the usage hint with the action statement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no output schema or annotations, the description covers the core function and state change but omits expected return value, error handling, and prerequisites (e.g., job existence, board availability). It is minimally adequate but incomplete for safe autonomous use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents parameters thoroughly. The description does not add new meaning beyond restating 'Activate selected job boards', which is already implied. It meets the baseline but does not enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action: activate selected job boards for a job. It specifies 'Sets boards to activation queue state', adding clarity. However, it does not explicitly differentiate from sibling batch tools like hires_batch_publish_to_boards, which slightly reduces distinctiveness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use for controlled multi-board publishing workflows', which implies a sequential or selective use case. But it does not mention when not to use this tool versus alternatives (e.g., batch tools), nor does it provide explicit exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_reject_application (Grade: A)
Reject an application with an optional rejection reason. Use GET /taxonomy/rejection-reasons to list available reason IDs. Set suppress_notification to skip the rejection email.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. | |
| rejection_reason_id | No | Rejection reason ID from GET /taxonomy/rejection-reasons. | |
| suppress_notification | No | Set to true to skip sending the rejection email to the candidate. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavioral traits. It mentions the rejection action and optional settings but fails to disclose potential irreversibility, required permissions, or side effects (e.g., whether unreject is possible).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences that immediately convey the purpose and key optional behaviors. No redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description provides a basic understanding but omits important context: response format, required workflow stage, permissions, or the existence of related tools (e.g., unreject).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes all parameters (100% coverage). The description adds value by explaining how to obtain rejection_reason_id and the effect of suppress_notification, enhancing practical usability.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Reject an application') and its optional parameters (rejection reason, suppress notification). It differentiates from siblings like batch_reject and unreject by focusing on a single application rejection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context for using the tool (e.g., listing rejection reasons via GET endpoint) but does not explicitly guide when to use this versus alternative tools (e.g., batch_reject_applications, unreject_application).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_remove_candidate_tag (Grade: B)
Remove a specific tag from a candidate.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
| tag | Yes | The tag string to remove. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description only states the action without disclosing side effects (e.g., error if tag not found), permission requirements, or confirmations. Lacks behavioral detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no unnecessary words. Though concise, it could be slightly expanded for clarity without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple (2 params, no output schema), but description lacks context on return values, error handling, and prerequisites. Adequate but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100%, so the schema already documents both parameters. Description adds no additional meaning beyond what schema provides, thus baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'remove' and identifies resource 'tag from a candidate'. It clearly distinguishes from sibling tools like 'hires_add_candidate_tags' (add) and 'hires_batch_remove_tags' (batch).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this single-tag removal versus batch removal or other alternatives. No prerequisites or context provided for appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_remove_from_job_board (Grade: C)
Deactivate selected board publications for a job. Stops the job from being listed on specified boards.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias | |
| boards | No | Array of board identifiers to deactivate (e.g. ['indeed', 'linkedin']) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden for behavioral traits. It implies a mutation but does not disclose any side effects, permissions required, reversibility, or response behavior. The description is too brief to be transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise at two short sentences. It front-loads the purpose. However, it could be slightly more informative without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and minimal annotations, the description is adequate for a simple tool but lacks behavioral and usage context that would make it complete. It tells the basic function but not enough for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond the schema: it simply repeats that boards are selected. No extra context on identifiers or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool deactivates board publications and stops listing on specified boards. It uses a specific verb ('Deactivate') and resource ('board publications for a job'), and the name distinguishes it from siblings like 'hires_publish_to_job_board'. However, it could be more precise about which boards are affected.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'hires_batch_remove_from_boards' or 'hires_publish_to_job_board'. The description does not mention prerequisites, limitations, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_restore_company (Grade: A)
Restore a previously deleted company and re-enable it for active use. Use for recovery and rollback scenarios.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Company ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must carry behavioral disclosure. It mentions 'restore' and 're-enable', indicating a mutation operation. However, it does not specify prerequisites (e.g., company must be deleted via the delete tool) or what happens if the company is already active. Basic transparency for a restore tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no extraneous words. Every sentence serves a purpose: stating the action and providing usage context. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter input and no output schema, the description sufficiently covers the tool's purpose and usage. It could mention success/failure cues, but overall it's complete for a restore operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds value by implying the ID must refer to a previously deleted company, which is beyond the schema's generic 'Company ID' description. This contextual meaning enhances parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Restore a previously deleted company and re-enable it for active use', providing a specific verb (restore) and resource (company). It distinguishes from sibling tools like 'hires_delete_company' by focusing on recovery and rollback.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes 'Use for recovery and rollback scenarios', which implies when to use the tool. However, it lacks explicit guidance on when not to use it or alternatives for restoring other entities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_send_candidate_message (Grade: B)
Schedule an email message to a candidate. If scheduled_at is omitted, the message is scheduled for 15 minutes after creation.
| Name | Required | Description | Default |
|---|---|---|---|
| cc | No | Carbon-copy recipient email addresses. | |
| id | Yes | Candidate ID (integer) or alias (string). | |
| to | Yes | Primary recipient email addresses. | |
| bcc | No | Blind carbon-copy recipient email addresses. | |
| body | Yes | Email body as HTML. | |
| subject | Yes | Email subject line. | |
| scheduled_at | No | Unix timestamp (seconds) for when to send. Defaults to 15 minutes after creation. | |
| application_id | No | Optional application ID to link this message to. | |
| from_account_id | No | Sending mail account ID. If omitted, uses the API key owner's default mail account. | |
| reply_to_email_id | No | Optional mailbox message ID to reply to. | |
| send_in_new_thread | No | Send as a new email thread instead of replying in an existing one. |
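A sketch of the required fields and the default-scheduling behavior the description mentions (the payload is hypothetical; the 15-minute default is from the description):

```python
import time

# Hypothetical hires_send_candidate_message arguments with only the
# required fields; recipient lists are arrays of addresses.
msg = {
    "id": 1177,                          # candidate ID or alias
    "to": ["jane.doe@example.com"],
    "subject": "Next steps",
    "body": "<p>Thanks for your time!</p>",
}

# When scheduled_at is omitted, the server defaults the send time to
# 15 minutes after creation; the client-side equivalent would be:
default_scheduled_at = int(time.time()) + 15 * 60

assert {"id", "to", "subject", "body"} <= set(msg)
assert default_scheduled_at > time.time()
```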
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must fully disclose behavioral traits. It only mentions the default scheduling behavior. It omits information on permissions, idempotency, side effects (e.g., message creation), and error handling, leaving significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the core functionality and default scheduling. Every word serves a purpose, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 11 parameters and no output schema, the description is too brief to provide complete context. It does not explain the overall workflow, response format, or validation behavior, leaving the agent underinformed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the default behavior of scheduled_at, but does not otherwise supplement the parameter meanings already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Schedule an email message to a candidate'), with a specific verb and resource. It distinguishes from sibling messaging tools by focusing on scheduling and candidate targeting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like hires_batch_create_messages or hires_create_email_template. No when-not-to-use or prerequisite details are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_set_job_status (Grade: B)
Change job status via dedicated endpoint. Recommended for publish/unpublish/archive transitions and status automation workflows.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias | |
| status | Yes | New job status (e.g. Draft, Public, Archived). See GET /taxonomy/statuses. | |
| include | No | Comma-separated related resources to embed: workflow, hiring_team, pipeline_stages |
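A sketch of a status-change call (the status values shown are the schema's examples; the full set comes from GET /taxonomy/statuses):

```python
# Hypothetical hires_set_job_status arguments. 'id' accepts a numeric
# job ID or a string alias, per the schema.
args = {
    "id": "senior-backend-engineer",     # alias form
    "status": "Public",
    "include": "workflow,pipeline_stages",
}

assert args["status"] in {"Draft", "Public", "Archived"}  # example values only
```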
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must bear the full burden. It only states the tool changes status via a dedicated endpoint, but does not disclose side effects, reversibility, permissions, or rate limits. This is a significant gap for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (one sentence with two clauses) and front-loaded with the action. It could be slightly more structured but is efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, no output schema, and no annotations, the description covers purpose and recommended use but lacks behavioral details. It is adequate but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning beyond what is already in the input schema; it simply repeats the action.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (change job status) and resource (job), and specifies recommended use cases (publish/unpublish/archive), making purpose clear. However, it does not explicitly differentiate from sibling tool 'hires_update_job' which may also change status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit recommendations for when to use the tool (publish/unpublish/archive transitions and automation workflows). It implies alternatives exist but does not list exclusions or directly compare to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_submit_career_application (Grade: A)
Submit a job application on behalf of a candidate. Creates a candidate record and triggers the career-site pipeline automation.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Applicant email address | |
| phone | No | Applicant phone number | |
| job_id | Yes | Job ID to apply to | |
| resume | No | Resume file upload (base64 encoded) | |
| source | No | Application source identifier | |
| answers | No | Array of form answer objects | |
| last_name | Yes | Applicant last name | |
| first_name | Yes | Applicant first name | |
| company_slug | Yes | Company slug identifying the career site | |
| linkedin_url | No | Applicant LinkedIn profile URL |
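A sketch that checks the five required fields from the table before submitting (all concrete values are hypothetical):

```python
# Hypothetical hires_submit_career_application arguments; the required
# set mirrors the parameter table above.
REQUIRED = {"email", "job_id", "last_name", "first_name", "company_slug"}

application = {
    "email": "jane.doe@example.com",
    "job_id": 512,
    "first_name": "Jane",
    "last_name": "Doe",
    "company_slug": "acme-inc",
    "source": "referral",               # optional
}

missing = REQUIRED - set(application)
assert not missing, f"missing required fields: {missing}"
```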
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full responsibility for disclosing behavioral traits. It explicitly states that it 'Creates a candidate record and triggers the career-site pipeline automation,' which are key side effects beyond the basic submission. However, it does not mention idempotency, duplicate handling, or authentication requirements. The description is reasonably transparent for a submission tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—two sentences that cover action, effect, and context without any extraneous words. Every sentence adds value: the first states the action, the second explains the aftermath. This is an ideal length for a tool description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters, 5 required, no output schema), the description is incomplete. It does not clarify the return value (e.g., an application ID), how the 'career-site pipeline automation' behaves, or whether candidate record creation is idempotent, and it omits behavioral details such as error handling and rate limits. For a tool with this many parameters and no output schema, the description is too minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description does not add new meaning beyond what the schema already provides for parameters. It gives context (e.g., 'on behalf of a candidate') but does not clarify parameter formats, dependencies, or defaults. Thus, the description does not significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Submit a job application on behalf of a candidate' and elaborates on the effects ('Creates a candidate record and triggers the career-site pipeline automation'). This distinguishes it from sibling tools like hires_create_application (which may only create an application record) and hires_create_candidate (which only creates a candidate). The verb 'submit' combined with 'on behalf of' signals agent action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives. It does not mention conditions, pre-requisites, or exclusions. For example, it does not clarify when to prefer hires_submit_career_application over hires_create_application or hires_create_candidate. The agent must infer use cases from the name and description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_submit_feedback (Grade: A)
Submit structured API feedback about missing features, issues, or workflow improvements. Rate limited to 5 requests per hour.
| Name | Required | Description | Default |
|---|---|---|---|
| context | No | Arbitrary context object (max 4KB JSON) | |
| endpoint | No | The API endpoint this feedback relates to, e.g. /v2/candidates | |
| issue_type | No | Category of the issue | |
| description | Yes | Description of the issue or feedback (max 2000 chars) | |
| suggested_improvement | No | Suggested solution or improvement (max 2000 chars) |
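A minimal sketch of an argument payload for this tool, built from the parameter table above. The field values and the `issue_type` category are invented for illustration; only the field names and size limits come from the table.

```python
import json

# Hypothetical arguments for hires_submit_feedback; values are made up.
feedback_args = {
    "description": "Bulk candidate export lacks a filter by tag.",  # required, max 2000 chars
    "issue_type": "missing_feature",                  # optional category (value assumed)
    "endpoint": "/v2/candidates",                     # optional related endpoint
    "context": {"tags_requested": ["python", "remote"]},  # optional, max 4KB JSON
}

# Client-side checks for the documented limits.
assert len(feedback_args["description"]) <= 2000
assert len(json.dumps(feedback_args["context"])) <= 4096
```

Remember the server-side limit of 5 requests per hour when batching feedback.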
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations present, the description still discloses the rate limit, the key behavioral trait. It does not mention side effects, confidentiality, or processing details, but for a feedback tool the primary behavioral constraint is well covered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states purpose, second adds rate limit. Every word earns its place, and the key information is front-loaded. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description could explain what happens after submission (e.g., acknowledgment, storage). However, the rate limit and clear purpose make it mostly complete for a feedback tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds only that feedback is about API features, which is implicit from the tool name. No additional parameter semantics beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'submit' and resource 'structured API feedback', specifying it covers missing features, issues, or workflow improvements. This distinguishes it from sibling tools focused on CRUD operations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the rate limit of 5 requests per hour, providing a key usage constraint. However, it does not explicitly state when to use this tool over alternatives, though the context implies it's for feedback only.
hires_transfer_application (Grade: B)
Transfer an application to another job. A new application is created on the target job. Optionally specify a stage on the target job's pipeline.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID to transfer. | |
| job_id | Yes | Target job ID to transfer the application to. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. | |
| stage_id | No | Pipeline stage ID on the target job. If omitted, defaults to the first stage. |
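A minimal sketch of the arguments an agent might send, using the field names from the table above; the IDs are invented for illustration.

```python
# Hypothetical argument payload for hires_transfer_application.
transfer_args = {
    "id": 4821,        # application to transfer (required)
    "job_id": 77,      # target job (required)
    "stage_id": 3,     # optional; omitting it targets the pipeline's first stage
    "include": "candidate,cv.text",  # optional embedded relations
}

# Only two fields are required; the rest can be dropped.
required = {"id", "job_id"}
assert required <= transfer_args.keys()
```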
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states a new application is created but does not clarify if the original application is removed or remains, which is critical for understanding side effects.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words, front-loaded with the core action. Every sentence is meaningful.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Missing key information: what happens to the original application, return value (likely the new application), and any prerequisites or constraints. For a mutation tool with no output schema, this is insufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds context for stage_id ('Optionally specify a stage on the target job's pipeline') but does not significantly enhance understanding of other parameters beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool transfers an application to another job, creating a new application on the target job. It distinguishes from sibling tools like hires_move_application (likely within same job) and hires_create_application.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for moving applications between jobs but lacks explicit guidance on when to use versus alternatives or when not to use. Sibling tools like hires_reject_application exist but no exclusions are provided.
hires_unreject_application (Grade: A)
Undo a rejection and reopen a previously rejected application. The status returns to active and rejected_at is cleared.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. |
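The documented state change can be sketched as a before/after on the application record; the ID and timestamp are illustrative, not taken from the API.

```python
# State of a rejected application before the call (values made up).
app = {"id": 915, "status": "rejected", "rejected_at": "2024-05-01T10:00:00Z"}

# After hires_unreject_application, per the description: status returns
# to active and rejected_at is cleared.
app.update(status="active", rejected_at=None)

assert app["status"] == "active" and app["rejected_at"] is None
```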
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description transparently discloses the behavioral effects: the status returns to active and rejected_at is cleared. It does not mention potential side effects like affecting related data or requiring specific permissions, but the core mutation is well-described.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys purpose and effect without unnecessary words or repetition.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and absence of an output schema, the description covers the essential behavior. It could mention the return value (e.g., updated application), but the current level is sufficient for understanding the action.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides descriptions for both parameters ('Application ID.' and 'Comma-separated relations to embed: candidate, cv.text.'), achieving 100% coverage. The description adds no further semantic meaning beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('undo a rejection and reopen') and specifies the exact state changes: status returns to active and rejected_at is cleared. It distinguishes itself from the sibling 'hires_reject_application' by being its inverse.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for previously rejected applications by stating 'undo a rejection and reopen a previously rejected application.' It provides clear context of use but does not explicitly exclude other states or mention alternatives like batch operations (which don't exist among siblings).
hires_update_application (Grade: A)
Update application fields such as stage, disqualification flag, and CV. For explicit stage transitions prefer hires_move_application or hires_advance_application.
| Name | Required | Description | Default |
|---|---|---|---|
| cv | No | Replace or attach a CV. | |
| id | Yes | Application ID. | |
| include | No | Comma-separated relations to embed: candidate, cv.text. | |
| stage_id | No | Move application to this pipeline stage. | |
| is_disqualified | No | Set to true to disqualify the candidate on this application. |
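A partial-update sketch using the field names above; the ID is invented. It also reflects the description's advice to route stage transitions through the dedicated tools.

```python
# Hypothetical partial-update payload for hires_update_application.
# Only the fields being changed are sent along with the required id.
update_args = {
    "id": 915,                # required application ID
    "is_disqualified": True,  # disqualify the candidate on this application
}

# Per the description, explicit stage transitions belong to
# hires_move_application / hires_advance_application, so stage_id is omitted.
assert "stage_id" not in update_args
```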
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. However, it only lists fields to update without mentioning side effects, idempotency, or other behaviors. For a mutation tool, this is insufficient.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences long, front-loaded with purpose, and no wasted words. Efficiently conveys what the tool does and when to use alternatives.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is brief and does not cover important context like behavior of is_disqualified (can both set and unset?), cv replacement rules, or that id is required. Given no output schema and no annotations, more detail would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents each parameter. The description lists 'stage, disqualification flag, and CV' which correspond to schema fields, but adds no additional meaning beyond what the schema provides. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool updates application fields like stage, disqualification flag, and CV. It also distinguishes itself from sibling tools by directing users to hires_move_application or hires_advance_application for explicit stage transitions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to prefer alternative tools for stage transitions, providing clear guidance on when not to use this tool. This helps the agent select the correct tool for the intended operation.
hires_update_candidate (Grade: B)
Update candidate fields, profile answers, and optional CV. Used for bi-directional sync from ATS, CRM, sourcing, or enrichment tools.
| Name | Required | Description | Default |
|---|---|---|---|
| cv | No | CV/resume file to attach (base64 payload). | |
| id | Yes | Candidate ID (integer) or alias (string). | |
| email | No | Candidate email address. | |
| phone | No | Candidate phone number. | |
| job_id | No | Job ID to create a new application for this candidate. | |
| profile | No | Key-value map of profile field answers. Keys can be question text or question_id. | |
| stage_id | No | Pipeline stage ID for the application. Requires job_id. | |
| last_name | No | Candidate last name. | |
| first_name | No | Candidate first name. |
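The one cross-parameter rule in the table ("stage_id ... Requires job_id") lends itself to a client-side guard. This is a sketch under assumed values; the validation helper is hypothetical, not part of the API.

```python
def validate_candidate_update(args):
    """Client-side checks implied by the table: id is required,
    and stage_id is only valid together with job_id."""
    if "id" not in args:
        raise ValueError("id is required")
    if "stage_id" in args and "job_id" not in args:
        raise ValueError("stage_id requires job_id")
    return args

candidate_args = validate_candidate_update({
    "id": "jane-doe-42",          # integer ID or string alias
    "email": "jane@example.com",
    "job_id": 77,                 # creates a new application for this job
    "stage_id": 3,                # allowed because job_id is present
})
```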
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only lists updatable fields without details on overwrite behavior, permissions, or side effects. Significant gaps remain.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences efficiently convey purpose and typical use. No extraneous content, though structured formatting (e.g., bullets) could improve scannability.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
As a mutation tool with 9 parameters and no output schema, the description is too brief. It omits return value, side effects, and confirmation of success, leaving agents with incomplete expectations.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter described. The description adds overall context but no additional meaning per parameter beyond schema, so baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool updates candidate fields, profile answers, and CV, differentiating it from create/delete/read siblings. Specific verb 'update' and resource 'candidate' make purpose unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions use for bi-directional sync from ATS/CRM, providing context but no explicit when-not-to-use or alternatives. Usage guidance is implied but lacks direct contrast with other tools.
hires_update_company (Grade: B)
Update company profile, owner contact data, and optional logo. Supports partner-operated account management.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Company ID | |
| url | No | Company profile URL | |
| logo | No | Company logo file | |
| name | No | Company name | |
| website | No | Company website URL | |
| company_owner_name | No | Company owner full name | |
| is_staffing_agency | No | Whether this company is a staffing agency | |
| company_owner_email | No | Company owner email address | |
| company_owner_phone | No | Company owner phone number |
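A minimal payload sketch with invented values; `id` is the only required field, so only the attributes being changed are included.

```python
# Hypothetical partial-update payload for hires_update_company.
company_args = {
    "id": 501,                                   # required company ID
    "name": "Acme Staffing",
    "is_staffing_agency": True,
    "company_owner_email": "owner@acme.example",
}

assert "id" in company_args  # the only required field
```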
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description does not disclose side effects, authentication needs, or error behavior. It only states the action without detailing what happens on failure or success.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no redundant information. Every word adds value, and the description is front-loaded with the main action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite 9 parameters and no output schema or annotations, the description does not cover behavioral context like return values or prerequisites. It is too brief for a complex mutation tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so all parameters are well-documented in the schema. The description adds no new semantic information beyond grouping parameters into 'company profile, owner contact data, and logo'.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Update company profile, owner contact data, and optional logo', providing a specific verb and resource. It distinguishes from siblings like hires_create_company by focusing on updating rather than creating, and mentions 'partner-operated account management' for additional context.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use or when not to use. No mention of prerequisites, alternatives, or exclusions. The phrase 'Supports partner-operated account management' hints at a use case but is not explicit.
hires_update_email_template (Grade: A)
Update an existing email template. Only provided fields are overwritten; omitted fields keep their current values. To add placeholders, use the same workflow as creation.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Email template ID | |
| body | No | Email body HTML (supports placeholders) | |
| name | No | Template name | |
| subject | No | Email subject line (supports placeholders) |
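The documented merge semantics ("Only provided fields are overwritten; omitted fields keep their current values") can be simulated with a dict merge. The `{{...}}` placeholder syntax and template values are illustrative assumptions, not confirmed by the API docs.

```python
# Current template state (values made up).
current = {
    "name": "Offer letter",
    "subject": "Offer from {{company_name}}",
    "body": "<p>Hi {{first_name}},</p>",
}

# The update sends only the subject; all other fields are omitted.
patch = {"subject": "Your offer from {{company_name}}"}
updated = {**current, **patch}  # provided fields overwrite, omitted fields persist

assert updated["name"] == "Offer letter"          # omitted field kept
assert updated["subject"].startswith("Your offer")
```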
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior. It reveals partial update behavior and placeholder workflow but omits potential side effects, permissions, or state changes beyond the update itself.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences convey the key points (partial update, placeholder workflow) without unnecessary detail. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple update tool with no output schema and no annotations, the description is sufficient. It explains the update semantics and differentiates from creation. Minor gap: no mention of required ID existence.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds no extra parameter meaning beyond explaining the update behavior, which is already implied by the tool name.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Update an existing email template', specifying the verb (update) and resource (email template). It distinguishes from siblings like create and delete by naming the action and referencing creation for placeholders.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes partial update semantics ('only provided fields are overwritten') and directs placeholder addition to creation workflow. However, it does not explicitly state when to use this vs. other template tools, though context makes it clear.
hires_update_form (Grade: C)
Update form name and question composition.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Form ID. | |
| name | Yes | Form name. | |
| questions | No | Array of question IDs to attach to this form. |
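A payload sketch with invented IDs. The docs do not say whether `questions` merges with or replaces the existing set; this sketch assumes full replacement, so every question ID to keep is listed.

```python
# Hypothetical payload for hires_update_form.
form_args = {
    "id": 12,                      # required form ID
    "name": "Engineering screen",  # required even if unchanged
    "questions": [340, 341, 355],  # assumed: the complete new composition
}

assert isinstance(form_args["questions"], list)
```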
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full responsibility for behavioral disclosure. It mentions 'update' but does not clarify whether the update is incremental (merge) or full replacement, especially for the questions array. The agent is left to assume the behavior without knowing if it's destructive (e.g., overwriting questions).
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that delivers the core purpose without extraneous words. It is front-loaded with the action and resource. However, it could be slightly expanded to improve clarity without losing conciseness.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema and no annotations, the description should compensate by explaining more about the update behavior, return value, and prerequisites (e.g., form must exist). The current description leaves significant gaps, making the tool feel incomplete for an agent to use safely.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, meaning each parameter is clearly described in the schema. The description adds marginal value by restating 'name and question composition,' but does not provide additional context beyond what the schema already offers. The required parameters are also clearly indicated.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and the resource ('form'), and specifies the aspects that can be updated ('name and question composition'). It distinguishes from sibling tools like create_form, delete_form, and get_form. However, it does not state outright that the form must already exist, though this can be inferred.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as update_form_question or create_form. There is no mention of prerequisites, expected use cases, or when not to use it. This omission makes it difficult for an agent to decide between sibling tools.
hires_update_form_question (Grade: A)
Update the status (required/optional/hidden) of a question inside a form.
| Name | Required | Description | Default |
|---|---|---|---|
| status | Yes | Question visibility on this form: required, optional, or hidden. | |
| form_id | Yes | Form ID. | |
| question_id | Yes | Question ID. |
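The status enum from the table above is easy to guard client-side; the IDs are invented for illustration.

```python
# Allowed values per the status parameter description.
ALLOWED_STATUSES = {"required", "optional", "hidden"}

question_args = {"form_id": 12, "question_id": 340, "status": "hidden"}

assert question_args["status"] in ALLOWED_STATUSES
```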
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility for behavioral disclosure. It only states the basic action (update status) without mentioning side effects, permissions, or what happens when status changes (e.g., impact on existing responses). This is insufficient for a mutation tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that efficiently communicates the tool's purpose with no wasted words. It is appropriately front-loaded with the key action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (3 parameters, no nested objects), the description covers the basic purpose but lacks information about return values or post-update behavior. The absence of an output schema means the agent has no guidance on what the tool returns, leaving some incompleteness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description repeats the enum values present in the schema ('required', 'optional', 'hidden') without adding new meaning beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (update), resource (question inside a form), and the specific aspect (status). It distinguishes itself from siblings like hires_update_form and hires_update_question by focusing on the status of a question within a form.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description implies the context (updating question status in a form) but does not mention exclusions or when not to use it. This leaves the agent to infer usage from the tool name and schema.
hires_update_job (Grade: B)
Update mutable job attributes. Only send fields you want to change. Preserves domain-level validation rules.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Job ID (numeric) or alias | |
| title | No | Public job title. | |
| status | No | Job status (e.g. Draft, Public). See GET /taxonomy/statuses. | |
| form_id | No | Application form ID to assign to this job. | |
| include | No | Comma-separated related resources to embed: workflow, hiring_team, pipeline_stages | |
| is_remote | No | Whether this is a remote position. | |
| salary_max | No | Maximum salary. | |
| salary_min | No | Minimum salary. | |
| category_id | No | Job category ID from GET /taxonomy/categories. | |
| description | No | Job description (HTML allowed). | |
| workflow_id | No | Workflow ID to assign to this job. | |
| department_id | No | Department ID from GET /taxonomy/departments. | |
| location_city | No | Job city. | |
| parent_job_id | No | Canonical parent job ID. If provided, the job becomes a satellite job. | |
| salary_period | No | Salary period. | |
| internal_title | No | Internal-only title visible to the hiring team. | |
| location_state | No | Job state or region. | |
| internal_job_id | No | External reference ID from your ATS or HR system. | |
| salary_currency | No | Salary currency code (e.g. USD, EUR). | |
| location_country | No | Job country. | |
| education_level_id | No | Education level ID from GET /taxonomy/education-levels. | |
| employment_type_id | No | Employment type ID from GET /taxonomy/employment-types. | |
| knockout_questions | No | Boolean knockout questions added to the application form. | |
| experience_level_id | No | Experience level ID from GET /taxonomy/experience-levels. | |
| resume_field_status | No | Resume field behavior on the application form. | |
| location_postal_code | No | Postal or ZIP code. | |
| location_full_address | No | Full formatted address. | |
| location_street_address | No | Street address. |
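The description's "Only send fields you want to change" suggests diffing the desired state against the current job before calling the tool. The helper and field values below are a hypothetical sketch, not part of the API.

```python
def changed_fields(current, desired):
    # Keep only the keys whose values differ from the current job record,
    # per the description's "Only send fields you want to change".
    return {k: v for k, v in desired.items() if current.get(k) != v}

current_job = {"title": "Backend Engineer", "status": "Draft", "is_remote": False}
patch = changed_fields(
    current_job,
    {"title": "Backend Engineer", "status": "Public", "is_remote": True},
)
update_args = {"id": 77, **patch}  # id is always required
```

With the values above, `patch` contains only `status` and `is_remote`, since the title is unchanged.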
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full behavioral burden. It mentions mutation and validation but omits permissions, side effects (e.g., webhook triggers), rate limits, or error behaviors. This is insufficient for a complex update tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences, free of redundancy, and front-loaded with the core purpose. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the tool's complexity (28 parameters and no output schema), the description is very brief. It does not explain return values, success/failure behavior, prerequisites, or how to use the required id parameter. It lacks the completeness this complexity level demands.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already explains all 28 parameters. The description adds minimal value beyond the partial update instruction, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool updates mutable job attributes, specifies partial updates ('Only send fields you want to change'), and mentions domain validation. This distinguishes it from create, delete, and other job-specific tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises sending only changed fields, which is good practice for updates. However, it does not explicitly contrast with sibling tools like hires_set_job_status or hires_create_job to guide when to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_update_message (Grade A)
Fully update (replace) a scheduled message before send time. All required fields must be provided.
| Name | Required | Description | Default |
|---|---|---|---|
| cc | No | Carbon-copy recipient email addresses. | |
| id | Yes | Message ID. | |
| to | Yes | Primary recipient email addresses. | |
| bcc | No | Blind carbon-copy recipient email addresses. | |
| body | Yes | Email body as HTML. | |
| subject | Yes | Email subject line. | |
| scheduled_at | No | Updated send time as a Unix timestamp in seconds. | |
| from_account_id | No | Sending mail account ID. If omitted, the API key owner's default mail account is used. | |
| reply_to_email_id | No | Optional mailbox message ID to reply to. | |
| send_in_new_thread | No | Whether to send the updated message as a new thread. |
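Because this is a full replacement (not a patch), every required field must be present on every call. A hedged sketch of a pre-flight check, with the required set taken from the parameter table above; `build_message_replacement` is a hypothetical helper, not part of the API:

```python
# Required fields per the parameter table: id, to, body, subject.
REQUIRED_FIELDS = {"id", "to", "body", "subject"}

def build_message_replacement(**fields):
    """Validate that a PUT-style full update carries all required fields."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"full update requires: {sorted(missing)}")
    return fields

args = build_message_replacement(
    id=101,
    to=["candidate@example.com"],
    subject="Interview follow-up",
    body="<p>Thanks for your time.</p>",
)
```

A PATCH-style sibling would instead accept only the changed fields; with a replace semantics, omitting an optional field like `cc` may clear it rather than preserve it, so callers should re-send the complete message.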
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It only states it updates/replaces a message before send time but does not disclose side effects (e.g., cancellation of scheduled notifications), permissions needed, or behavior if used after send time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler. The first sentence states the purpose, the second provides a key constraint. Every word contributes meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is minimal for a complex tool with no output schema and no annotations. It doesn't explain update behavior beyond replacement, nor what the response is. Parameter descriptions in schema are complete, but overall context lacks detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds 'All required fields must be provided', which is already implied by the schema's 'required' array. No additional meaning is provided for any parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fully update (replace) a scheduled message', specifying both the verb (update) and the resource (scheduled message). It distinguishes the tool from the sibling 'hires_patch_message' by using 'Fully update' to imply a replacement rather than a partial update.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes 'before send time' and 'All required fields must be provided', providing clear context for when to use. It implies contrast with a partial update but does not explicitly name the sibling 'hires_patch_message' or state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_update_note (Grade A)
Update note body and/or visibility without creating a new timeline item. Use for corrections and moderation workflows.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Note ID | |
| body | No | Note content. Supports HTML. | |
| include | No | Include related resources, e.g. 'user' for author details | |
| visibility | No | Visibility: 'all' (default) or 'private' |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It mentions 'without creating a new timeline item' but lacks details on destructive effects, rate limits, or other side effects, and says nothing about required permissions or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences: cognitive load is low. The first sentence is a clear action statement, the second provides usage context. No wasted words; every part serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no output schema, and moderate complexity, the description covers core purpose but omits details on 'include' parameter and behavior when fields are omitted. Lacks completeness for a fully autonomous agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions, so baseline is 3. The description mentions 'body and/or visibility' but not the 'id' or 'include' parameters, adding minimal meaning beyond schema. It only slightly improves understanding of which fields are updatable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Update note body and/or visibility without creating a new timeline item,' specifying the verb (update) and resource (note). It distinguishes from create_note by emphasizing no new timeline item, making purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use for corrections and moderation workflows,' providing clear context. While it doesn't list exclusions or alternatives, the sibling names (create_note, delete_note) imply when not to use (e.g., for new notes or deletions).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_update_notification_message (Grade A)
Update a scheduled notification email before it is sent. Change subject, body, and optionally reschedule the send time. Only scheduled (not yet sent) messages can be updated.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Notification email message ID. | |
| body | Yes | Email body as HTML. | |
| subject | Yes | Email subject line. | |
| scheduled_at | No | Unix timestamp (seconds) to reschedule send time. If omitted, the existing schedule is preserved. |
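The `scheduled_at` parameter is a Unix timestamp in seconds; a common mistake is passing milliseconds or a naive local time. This hedged helper (an assumption, not part of the API) converts a timezone-aware datetime into the expected integer form:

```python
from datetime import datetime, timezone

def to_unix_seconds(dt: datetime) -> int:
    """Convert a timezone-aware datetime to a Unix timestamp in seconds."""
    if dt.tzinfo is None:
        raise ValueError("use a timezone-aware datetime to avoid ambiguity")
    return int(dt.timestamp())

# Reschedule a notification for 2025-06-01 09:00 UTC.
send_at = to_unix_seconds(datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc))
```

Since `scheduled_at` is optional here, omitting it preserves the existing schedule, per the parameter table above.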
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the update action, modifiable fields (subject, body, scheduled_at), and the constraint that only unsent messages can be updated. However, it lacks details on idempotency, side effects, return value, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the main purpose and constraint. Every sentence is essential, with no redundant or verbose phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description should hint at what the tool returns or when it fails (e.g., if the message is already sent). It does not mention return values or error conditions. It is adequate for basic use but incomplete for advanced scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context by mentioning 'optionally reschedule the send time' for the scheduled_at parameter, clarifying its optional nature and purpose beyond the schema description. This adds value beyond the structured schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Update', the resource 'scheduled notification email', and the specific actions (change subject, body, optionally reschedule). It also distinguishes from siblings by noting that only scheduled (not yet sent) messages can be updated, which differentiates it from deletion or retrieval tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Update a scheduled notification email before it is sent' and 'Only scheduled (not yet sent) messages can be updated', providing clear guidance on when to use the tool. However, it does not mention alternative tools for already-sent messages, such as deletion or resending.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_update_nurture_campaign (Grade A)
Update an existing nurture campaign. Pass all steps -- mark removed steps with is_deleted=true. Existing steps must include their id.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Nurture campaign ID | |
| steps | Yes | All steps -- mark removed steps with is_deleted=true | |
| title | Yes | Campaign name | |
| stage_id | No | Pipeline stage ID that triggers the campaign | |
| timezone | No | Timezone for scheduled sends, e.g. "America/New_York" | |
| delay_time | No | Delay in minutes before the first step | |
| send_to_all | No | Whether to send to all candidates or only new ones | |
| workflow_id | No | Workflow ID this campaign is associated with | |
| relative_days | No | Number of days offset for scheduling | |
| relative_time | No | Time of day for scheduled sends | |
| response_move_to_stage_id | No | Stage ID to move candidates to when they respond |
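The steps contract described above (pass the full array, keep ids on existing steps, flag removals with is_deleted=true) can be sketched as a merge helper. This is a hypothetical client-side helper under those assumptions; the step field names beyond `id` and `is_deleted` are illustrative:

```python
def merge_steps(existing, keep_ids, new_steps):
    """Build the full steps array for an update.

    existing:  current steps, each a dict carrying its 'id'
    keep_ids:  set of step ids to retain
    new_steps: freshly added steps (no id yet)
    """
    steps = []
    for step in existing:
        step = dict(step)  # don't mutate the caller's data
        if step["id"] not in keep_ids:
            step["is_deleted"] = True  # removed steps stay in the array
        steps.append(step)
    steps.extend(new_steps)
    return steps

steps = merge_steps(
    existing=[{"id": 1, "subject": "Hello"}, {"id": 2, "subject": "Ping"}],
    keep_ids={1},
    new_steps=[{"subject": "Final reminder"}],
)
```

Note that a removed step is not simply dropped from the array; dropping it would leave the server unable to distinguish "delete this step" from "caller forgot to send it".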
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains the key behavioral trait: the update requires passing all steps, with removed steps marked as is_deleted=true, and existing steps must include their id. This is critical for correct usage. However, it omits other behavioral details like return value or atomicity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no wasted words. Front-loaded with the action and key rule. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the main complexity (step handling) but does not mention return value or error conditions. For a tool with many parameters and no output schema, it leaves some gaps. Could be more complete with a line about what the response looks like.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value beyond the schema by clarifying the requirement to pass the full steps array and the use of is_deleted and id. This goes beyond field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update an existing nurture campaign') and the resource. It provides specific guidance about how to handle steps (pass all steps, mark removed steps with is_deleted=true, include ids for existing steps). This distinguishes it from sibling create/delete tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (to update an existing campaign) but does not explicitly state when not to use or provide comparisons to alternatives like create_campaign or delete_campaign. It gives usage context but lacks explicit guidelines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_update_question (Grade A)
Update text, type, or options of an existing question definition.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Question ID | |
| text | Yes | Question text | |
| type | Yes | Question type (from hires_list_question_types) | |
| options | No | Answer options (for select/multiselect question types) |
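Since `options` only applies to select/multiselect question types per the table above, a small guard can catch mismatches before the call. This is a hedged sketch: the literal type strings are assumptions (real values come from hires_list_question_types), and `build_question_update` is a hypothetical helper:

```python
def build_question_update(question_id, text, question_type, options=None):
    """Assemble arguments, enforcing the options/type pairing."""
    args = {"id": question_id, "text": text, "type": question_type}
    if question_type in ("select", "multiselect"):
        if not options:
            raise ValueError(f"{question_type} questions need answer options")
        args["options"] = options
    elif options:
        raise ValueError("options only apply to select/multiselect questions")
    return args

args = build_question_update(5, "Preferred office?", "select",
                             options=["NYC", "Remote"])
```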
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description bears full burden. It states 'update' but does not disclose any side effects, required permissions, or behavior when changing types (e.g., effect on existing options). Minimal transparency beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that conveys the essential information without any extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple update tool with 4 parameters and no output schema, the description is adequate but lacks important context. It does not clarify that 'options' only applies to certain types (though schema indicates it), or that the 'id' must exist. Could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so the description adds little beyond listing the fields to update. The schema already describes each parameter. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('update') and the resource ('existing question definition'), specifying what can be updated ('text, type, or options'). It distinguishes from sibling tools like 'hires_create_question' and 'hires_delete_question'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use or when not to use this tool. The usage is implied (when modifying an existing question), but alternatives or exclusions are not mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_upload_application_attachment (Grade B)
Upload a file attachment to an application. Provide the file as base64-encoded data. Commonly used for signed documents and interviewer artifacts.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Application ID. | |
| file | Yes | File to upload. |
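The file must be provided as base64-encoded data. A hedged sketch of preparing the `file` argument; the nested field names (`name`, `data`) are assumptions for illustration, since the schema excerpt above does not show the object's internal structure:

```python
import base64

def encode_attachment(filename: str, raw: bytes) -> dict:
    """Wrap raw bytes as a base64 file payload (field names assumed)."""
    return {
        "name": filename,
        "data": base64.b64encode(raw).decode("ascii"),
    }

file_arg = encode_attachment("offer-signed.pdf", b"%PDF-1.7 ...")
```

Base64 inflates payload size by roughly a third, so any server-side file-size limit applies to the decoded bytes, which is worth checking before encoding large documents.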
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. While it indicates the tool uploads a file (mutation), it does not disclose behavioral traits such as required permissions, file size limits, whether it replaces existing attachments, or what the response looks like. This is a significant gap for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a concise three sentences: the first states the core purpose, the second gives encoding guidance, and the third lists common use cases. Every word is necessary, and it is front-loaded. No wasted content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should cover return values, constraints (e.g., file size), and prerequisites. It explains the file format but omits what happens after upload, any side effects, or required permissions. The tool involves a nested object parameter, so more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, so the schema already documents all parameters clearly. The description adds minor value by reiterating base64 encoding and giving example use cases (signed documents, artifacts), but it does not add meaning beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Upload a file attachment to an application.' The verb 'upload' and resource 'file attachment' are specific, and the context 'to an application' distinguishes it from sibling tools like 'hires_upload_attachment' or 'hires_download_attachment'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions common use cases ('signed documents and interviewer artifacts') and the requirement to provide base64-encoded data. However, it does not explicitly state when to use this tool versus alternatives (e.g., hires_upload_attachment for general uploads, or hires_upload_candidate_file for candidate-level files), nor does it provide when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_upload_attachment (Grade A)
Upload a file and create an attachment. Supported categories: voicemail (wav/mp3, max 20 MB, no object_id — returned uuid is usable as attachment_uuid in nurture voicemail steps); candidate (candidate ID); application (application ID); candidate_comment (comment ID); job_note (job-note ID); company_favicon/company_header/company_link_preview (company ID). Object ownership is strictly verified against the authenticated API key's company. Returns {uuid, url, file, relative_time}.
| Name | Required | Description | Default |
|---|---|---|---|
| file | Yes | File payload. | |
| category | Yes | Attachment category. Determines allowed extensions and object_id semantics. | |
| object_id | No | Target object ID (candidate/application/comment/job-note/company, per category). Omit for `voicemail`. | |
| company_id | No | Target company ID. Needed for partner API keys managing multiple client companies. Omitted → defaults to the authenticated company. The object_id must belong to this company (strict match). |
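Because `object_id` semantics depend on `category` (voicemail takes none, every other category requires the matching entity ID), a pre-flight check can fail fast before the upload. A hedged sketch based on the category list in the description above; `validate_upload` is a hypothetical client-side helper:

```python
# Categories that require an object_id, per the tool description.
CATEGORIES_WITH_OBJECT = {
    "candidate", "application", "candidate_comment", "job_note",
    "company_favicon", "company_header", "company_link_preview",
}

def validate_upload(category, object_id=None):
    """Check the category/object_id pairing before calling the tool."""
    if category == "voicemail":
        if object_id is not None:
            raise ValueError("voicemail uploads take no object_id")
    elif category in CATEGORIES_WITH_OBJECT:
        if object_id is None:
            raise ValueError(f"category {category!r} requires an object_id")
    else:
        raise ValueError(f"unknown category {category!r}")
    return True
```

Remember that ownership is strictly verified server-side: even a well-formed `object_id` will be rejected if it does not belong to the authenticated (or specified) company.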
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses file constraints (max 20 MB, wav/mp3 for voicemail), ownership verification ('strictly verified against the authenticated API key's company'), and return structure. This is valuable as no annotations are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four moderately long sentences cover all necessary information without redundancy. The info is front-loaded (the first sentence states the core purpose), but the enumeration of categories could be more compact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers input parameters, file constraints, ownership, and return values. No output schema exists, but the return structure is specified. It adequately addresses the tool's complexity (nested file object, conditional object_id).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds critical context for each parameter: category values and their implications, object_id relation to category, company_id default behavior. For example, voicemail's uuid usage in nurture steps is explained, adding meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'Upload a file and create an attachment,' clearly stating the action and resource. It lists all 8 supported categories with specific constraints (file types, max size, object_id requirements), distinguishing it from sibling tools like hires_upload_application_attachment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides per-category guidance on when to omit object_id (voicemail) and specifies which ID type to use. It also explains company_id behavior. However, it does not explicitly contrast with more specific upload tools, leaving some ambiguity about when to use this general tool vs hires_upload_application_attachment.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hires_upload_candidate_file (Grade A)
Upload a file for a candidate using a base64 payload. Used for resume ingestion, portfolio uploads, and document attachment.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Candidate ID (integer) or alias (string). | |
| file | Yes | File to upload (base64 payload). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions using base64 payloads but omits details about side effects (e.g., overwriting existing files), authentication requirements, rate limits, or file size constraints. With no annotations provided, more behavioral context is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, front-loading the action and use cases. It avoids redundancy, though a slightly structured format could improve readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is brief for a tool with nested parameters and no output schema. It does not explain return values, error handling, prerequisites (e.g., candidate must exist), or success/failure indicators, leaving gaps in agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for all parameters. The description adds minimal value beyond stating the base64 payload method; it does not elaborate on parameter formats, constraints, or interactions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (upload), resource (file for a candidate), and method (base64 payload). It lists specific use cases (resume ingestion, portfolio uploads, document attachment) that distinguish it from sibling upload tools like hires_upload_application_attachment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage scenarios but does not explicitly state when to use this tool over alternatives. It lacks exclusions or conditions for use, and does not mention related tools like hires_upload_attachment for differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.