Server Details

Search remote jobs, post job listings, find remote candidates, check salary benchmarks, and manage your career, all through AI conversation. The Himalayas MCP server connects your AI assistant to the Himalayas remote jobs marketplace in real time.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 41 of 41 tools scored. Lowest: 2.8/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between employer and general user actions that could cause confusion. For example, 'get_company_details' and 'get_company_profile' serve similar functions but target different users, and 'post_job_public' and 'create_company_job' both create jobs but with different authentication requirements. Descriptions help clarify, but the boundaries are not always clear at first glance.

Naming Consistency: 5/5

Tool names follow a highly consistent verb_noun pattern throughout, such as 'add_company_perk', 'get_job_details', and 'update_company_job'. There are no deviations in naming conventions, making the set predictable and easy to parse for agents.

Tool Count: 2/5

With 41 tools, the count is excessive for a job board server and risks overwhelming agents with redundant options. Many tools could be consolidated, such as the multiple update functions or the separate get operations for similar resources, leaving the surface feeling bloated and harder to navigate.

Completeness: 5/5

The tool set provides comprehensive coverage for the Himalayas remote jobs domain, including job browsing, company management, talent search, profile updates, and messaging. It supports full CRUD operations for key entities like jobs, profiles, and conversations, with no obvious gaps that would hinder agent workflows.

Available Tools

41 tools
add_company_perk: B

Add a perk/benefit to your company on Himalayas. Requires employer authentication.

Parameters (JSON Schema)
title (required): Perk title (3-50 characters)
category (required): Perk category
description (required): Perk description (15-450 characters)
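As a sketch, a well-formed payload for this tool might look like the following. The field names come from the schema above; the values are hypothetical, and the plain dict stands in for whatever argument structure your MCP client uses.

```python
# Hypothetical arguments for add_company_perk; values are examples only.
args = {
    "title": "Home office stipend",  # 3-50 characters
    "category": "Equipment",
    "description": "An annual budget to set up and upgrade your remote workspace.",  # 15-450 characters
}

# Pre-flight checks against the documented length constraints.
assert 3 <= len(args["title"]) <= 50
assert 15 <= len(args["description"]) <= 450
```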
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It successfully notes the authentication requirement but omits other behavioral traits: whether the addition is immediate or pending review, reversibility (despite 'remove_company_perk' existing), side effects on company profile, or return value structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes purpose, second states auth requirement. Appropriately front-loaded and sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a straightforward 3-parameter creation tool with complete schema coverage. The authentication caveat fills the critical gap left by absent annotations. Could strengthen by noting relationship to 'get_company_perks' or profile visibility, but sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (title, category, description all documented). Description adds no parameter-specific semantics beyond the schema, but baseline 3 is appropriate given the schema's completeness. No syntax hints or parameter interdependencies are described.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Add') and resource ('perk/benefit') with platform context ('on Himalayas'). Does not explicitly differentiate from sibling tools like 'remove_company_perk' or 'get_company_perks', though the verb distinction is implicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides critical usage constraint ('Requires employer authentication') but lacks explicit guidance on when to use this versus alternatives like 'remove_company_perk' or 'update_company_profile'. The auth requirement implies the 'when' (employer context) but no comparison logic is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

add_education: B

Add an education entry to your Himalayas profile. Requires authentication.

Parameters (JSON Schema)
field (optional): Field of study (e.g., 'Computer Science')
grade (optional): Grade or GPA
degree (optional): Degree (e.g., 'Bachelor of Science')
school (required): School or institution name
current (optional): Whether you are currently enrolled
end_year (optional): End year (omit if currently enrolled)
activities (optional): Activities and societies
start_year (optional): Start year
description (optional): Description or achievements
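A hypothetical payload, assuming example values, might look like this. Note the schema's rule that 'end_year' should be omitted when 'current' is true:

```python
# Hypothetical arguments for add_education. Only "school" is required;
# "end_year" is omitted because "current" is True, per the schema note.
args = {
    "school": "University of Waterloo",  # example value
    "degree": "Bachelor of Science",
    "field": "Computer Science",
    "start_year": 2021,
    "current": True,
}

assert "school" in args  # the only required field
assert not (args.get("current") and "end_year" in args)  # mutually exclusive
```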
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions authentication requirements but fails to disclose mutation characteristics (persistence, idempotency), error conditions, or whether duplicate entries are permitted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact at two sentences and front-loads the core action. However, 'Requires authentication' could be integrated more smoothly or expanded slightly to specify the auth method without significantly impacting length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich schema with complete parameter descriptions, the minimal description is adequate for basic invocation. However, for a mutation tool with no output schema and no annotations, it lacks disclosure of success indicators, returned data, or side effects on the profile.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 9 parameters. The description adds no parameter-specific guidance (e.g., that 'school' is required or that 'end_year' should be omitted if 'current' is true), meriting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Add') and resource ('education entry') with clear scope ('your Himalayas profile'). It implicitly distinguishes from sibling 'add_experience' by resource type, though it doesn't explicitly differentiate from 'update_profile' for profile modifications.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states 'Requires authentication,' indicating a prerequisite for use. However, it lacks explicit guidance on when to use this versus 'update_profile' or whether this appends to versus replaces existing education data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

add_experience: B

Add a work experience to your Himalayas profile. Requires authentication.

Parameters (JSON Schema)
title (required): Job title
end_date (optional): End date (YYYY-MM-DD format, omit for current role)
location (optional): Job location
start_date (optional): Start date (YYYY-MM-DD format)
current_job (optional): Whether this is your current job
description (required): Description of your role and responsibilities
company_name (required): Company name
employment_type (required): Employment type
experience_skill_list (optional): Comma-separated skills used in this role
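A sketch of a valid payload, assuming hypothetical company and date values, illustrating the documented date format and the current_job/end_date interaction:

```python
from datetime import date

# Hypothetical arguments for add_experience; company and dates are examples.
args = {
    "title": "Backend Engineer",
    "company_name": "Acme Remote Co",
    "employment_type": "Full-time",
    "description": "Built and operated billing APIs for a remote-first team.",
    "start_date": "2022-03-01",                        # YYYY-MM-DD format
    "current_job": True,                               # so end_date is omitted
    "experience_skill_list": "python,postgresql,aws",  # comma-separated
}

date.fromisoformat(args["start_date"])  # raises ValueError if not YYYY-MM-DD
assert not (args["current_job"] and "end_date" in args)
```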
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the authentication requirement but fails to mention what the tool returns (ID, object, or success boolean), error conditions, whether the operation is idempotent, or side effects beyond creation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. The primary purpose is front-loaded in the first sentence, and the authentication constraint follows logically.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 9-parameter complexity and lack of output schema or annotations, the description is minimally adequate. It covers the core action and auth, but omits return value documentation and relationships between parameters (e.g., current_job vs end_date logic).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear semantics for all 9 parameters (e.g., date formats, enum values). The description adds no parameter-specific context, but the high schema coverage means it doesn't need to; baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Add', the resource 'work experience', and the target 'Himalayas profile'. It implicitly distinguishes from sibling tools like add_education by specifying 'work experience' versus education, though it could explicitly differentiate from profile update tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'Requires authentication' which is a prerequisite, but provides no guidance on when to use this tool versus alternatives like update_profile, or whether the profile must exist first. No explicit when/when-not guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_job_payment_status: A

Check the payment status of a job posting. Use the session_id returned from post_job_public or create_company_job with extras. No authentication required.

Parameters (JSON Schema)
session_id (required): Stripe checkout session ID
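An illustrative payload with a made-up session ID; real values are returned by post_job_public or create_company_job when paid extras are selected:

```python
# Hypothetical session_id; real values come back from post_job_public or
# create_company_job when paid extras are purchased.
args = {"session_id": "cs_test_a1B2c3D4e5F6"}

# Stripe Checkout session IDs conventionally start with "cs_".
assert args["session_id"].startswith("cs_")
```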
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds valuable auth context ('No authentication required') and implies read-only nature. However, lacks disclosure of return format, possible error states, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with zero waste: purpose first, then prerequisite source, then auth requirement. Front-loaded structure makes intent immediately clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter status checker with no output schema, description adequately covers purpose, prerequisites, and auth. Minor gap: does not hint at expected return values (e.g., 'paid', 'pending', 'failed') which would help the agent handle responses.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (session_id described as 'Stripe checkout session ID'), establishing baseline 3. Description adds provenance context by specifying the session_id comes from post_job_public or create_company_job, helping the agent understand parameter source.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Check' with clear resource 'payment status of a job posting' distinguishes this from sibling tools like post_job_public or create_company_job which create jobs rather than check payment status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite ('Use the session_id returned from post_job_public or create_company_job with extras') and identifies specific sibling tools that must be called first. Also notes 'No authentication required' which is a critical usage constraint.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_company_job: A

Post a new job on Himalayas. Jobs are free to post and require admin approval before going live. Requires employer authentication.

Parameters (JSON Schema)
draft (optional): Save as draft instead of submitting for approval
title (required): Job title (5-80 characters)
extras (optional): Paid extras: 'sticky' ($199 pin to top for 30 days), 'newsletter' ($99 feature in weekly email for 30 days)
seniority (required): Seniority levels
max_salary (optional): Maximum salary
skill_list (optional): Comma-separated skills
base_salary (optional): Minimum salary
description (required): Job description (350+ characters, can include HTML)
category_list (optional): Comma-separated job categories
valid_through (optional): Expiration date (ISO format, defaults to 30 days from now)
salary_country (optional): Salary currency code (default: USD)
employment_type (required): Employment type
app_link_or_email (required): Application URL or email address — backend auto-detects via @ check
screening_questions (optional): Screening questions for applicants
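A sketch of a payload satisfying the required fields and documented constraints; the job copy, email address, and company are invented for illustration:

```python
# Hypothetical arguments for create_company_job; all values are examples.
description = (
    "We are looking for a senior backend engineer to own our billing "
    "platform. You will design APIs, operate PostgreSQL at scale, and "
    "mentor teammates. We are a remote-first company with async-friendly "
    "processes, generous home-office budgets, and quarterly meetups. "
    "Experience with payment systems and distributed queues is a plus. "
    "Benefits include equity, a learning stipend, and flexible hours."
)

args = {
    "title": "Senior Backend Engineer",       # 5-80 characters
    "seniority": ["Senior"],
    "employment_type": "Full-time",
    "description": description,               # must be 350+ characters
    "app_link_or_email": "jobs@example.com",  # "@" present, so treated as email
    "extras": ["sticky"],                     # paid add-on: pin to top for 30 days
}

assert 5 <= len(args["title"]) <= 80
assert len(args["description"]) >= 350
```

Because 'extras' includes a paid option here, a client would follow up with check_job_payment_status using the returned session ID.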
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the approval workflow and authentication requirements, but omits other behavioral traits such as return values, rate limits, side effects, or what constitutes a successful invocation beyond submission.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three tightly written sentences with zero redundancy. Information is front-loaded with the core action ('Post a new job'), followed by cost/approval details, and ending with authentication requirements. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 14-parameter creation tool with no output schema, the description covers the essential business logic (approval workflow, authentication) but remains incomplete regarding the return value or success indicators. It adequately covers the complexity but leaves gaps around output semantics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, establishing a baseline of 3. The description does not add semantic details about specific parameters (e.g., explaining the draft flag's interaction with the approval workflow, or the email/URL auto-detection logic), but given the comprehensive schema, it does not need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Post') and resource ('new job') on the specific platform ('Himalayas'). It implicitly distinguishes from sibling tools like update_company_job and delete_company_job by specifying 'new', though it does not explicitly differentiate from post_job_public.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides important prerequisites ('Requires employer authentication') and workflow constraints ('require admin approval before going live'), which help determine when the tool is applicable. However, it lacks explicit guidance contrasting this tool with similar siblings like post_job_public or stating when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_company_job: A

Delete a job posting from your company on Himalayas. This action cannot be undone. Requires employer authentication.

Parameters (JSON Schema)
job_slug (required): Job slug to delete
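Since the description warns the action cannot be undone, a client might gate the call on explicit confirmation. The slug and guard function below are hypothetical:

```python
# Hypothetical slug for delete_company_job; real slugs identify your
# company's existing job listings.
args = {"job_slug": "senior-backend-engineer"}  # example slug

def should_delete(job_slug: str, confirmed: bool) -> bool:
    """Only proceed with an irreversible delete after explicit confirmation."""
    return bool(job_slug) and confirmed

assert should_delete(args["job_slug"], confirmed=True)
assert not should_delete(args["job_slug"], confirmed=False)
```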
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Effectively discloses destructive nature ('cannot be undone') and authentication requirements. Good safety disclosure for a deletion operation, though could specify what happens to associated applications or data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: action statement, irreversibility warning, and auth requirement. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple single-parameter deletion tool. Covers core purpose, permanent consequences, and access control. No output schema exists, but description adequately prepares the agent for the operation's nature without needing return value documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with job_slug described as 'Job slug to delete'. Description adds no parameter-specific guidance, but baseline 3 is appropriate since schema already fully documents the single required parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Delete' + resource 'job posting' + scope 'from your company on Himalayas'. Clearly distinguishes from siblings like create_company_job, update_company_job, and delete_conversation through the specific resource and company context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States prerequisite 'Requires employer authentication' but lacks explicit guidance on when to choose this over alternatives like update_job_status (archiving vs deletion). Usage is implied through the auth requirement rather than explicitly guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_conversation: A

Delete a conversation. Accepts room_name or talent_slug. Requires employer authentication.

Parameters (JSON Schema)
room_name (optional): Room name from list_conversations
talent_slug (optional): Talent slug (from search_talent results) — resolves to room name automatically
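Per the description, the tool accepts either identifier. A sketch with a hypothetical slug, checking that exactly one of the two is supplied:

```python
# Hypothetical identifier. Per the schema, supply either room_name (from
# list_conversations) or talent_slug (from search_talent), not both.
args = {"talent_slug": "jane-doe"}  # example slug

assert ("room_name" in args) != ("talent_slug" in args)  # exactly one
```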
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions authentication requirement and implies destructive operation via 'Delete', but fails to confirm irreversibility, side effects (e.g., notifications to talent), or return value structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences with zero waste. Front-loaded with the action verb, followed by input flexibility, then authentication constraint. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a two-parameter deletion tool with good schema coverage. Authentication requirement is covered. However, given the lack of annotations and output schema, the description should explicitly state the destructive/permanent nature of the operation and expected return behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear provenance for each parameter (room_name from list_conversations, talent_slug from search_talent). Description reinforces the OR relationship between parameters but adds no semantic detail beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Delete') and resource ('conversation'). Implicitly distinguishes from siblings like get_conversation, start_conversation, and list_conversations through the specific destructive action, though it doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides the critical constraint 'Requires employer authentication' which functions as a when-not guideline. However, lacks explicit guidance on when to prefer this over related messaging tools or prerequisites beyond authentication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_companies: B

Browse remote-friendly companies with optional filtering by country or worldwide availability

Parameters (JSON Schema)
page (optional): Page number for pagination (default: 1)
country (optional): Filter companies by country (e.g., 'Canada', 'United States', 'UK')
worldwide (optional): Show only companies with 100% remote jobs available worldwide (overrides country filter)
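A sketch of the filter interaction documented in the schema: when worldwide is set, the country filter is ignored. The filter-resolution logic below is illustrative, not the server's actual implementation:

```python
# Hypothetical filters for get_companies. Per the schema, worldwide=True
# overrides the country filter.
args = {"page": 1, "country": "Canada", "worldwide": True}

# Illustrative resolution of the documented precedence rule.
effective = "worldwide-only" if args.get("worldwide") else args.get("country")
assert effective == "worldwide-only"  # country is ignored when worldwide is set
```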
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. While 'browse' implies a read-only operation, the description does not explicitly confirm safety, idempotency, rate limits, or pagination behavior beyond what the parameter schema states.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, efficient sentence that front-loads the action and resource. No redundant words; every element serves to clarify scope or filtering capability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple listing tool with well-documented parameters, but lacks description of return values (no output schema provided) and omits critical sibling differentiation that would help an agent select the correct tool in a workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds the domain context that these are 'remote-friendly' companies (not explicit in schema) and mentions filtering capabilities, but does not elaborate on parameter syntax or interactions beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Browse' with clear resource 'remote-friendly companies' and mentions available filters. However, it does not differentiate from the sibling 'search_companies' tool, leaving ambiguity about when to list versus search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'search_companies' or 'get_company_details'. No prerequisites, exclusions, or workflow context is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company_details: B

Get full details for a company including about, tech stack, benefits, open positions, and social links

Parameters (JSON Schema)
company_slug (required): Company slug (e.g., 'stripe', 'gitlab')
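A minimal sketch using one of the schema's own example slugs; the lowercase, no-spaces checks reflect the slug convention implied by the examples, not a documented rule:

```python
# Slug taken from the schema's examples; checks mirror the implied
# URL-friendly slug convention (lowercase, no spaces).
args = {"company_slug": "stripe"}

assert args["company_slug"] == args["company_slug"].lower()
assert " " not in args["company_slug"]
```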
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. While 'Get' implies a safe read operation, the description lacks details about rate limits, authentication requirements, error handling behavior, or the structure/format of the returned data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It front-loads the action and resource, then efficiently lists the specific data categories returned, maximizing information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input and lack of output schema, the description partially compensates by listing the fields returned (about, tech stack, etc.). However, it omits error handling behavior and doesn't clarify whether this is a superset of 'get_company_profile' data, which would help agents select the correct tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (company_slug is well-documented with examples), the baseline score applies. The description adds no additional parameter semantics (syntax, format constraints, validation rules), but none are needed given the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('company details'), and distinguishes from siblings like 'get_company_profile' and 'get_company_perks' by enumerating specific returned fields (about, tech stack, benefits, open positions, social links). However, it doesn't explicitly contrast with the similar 'get_company_profile' endpoint.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_company_profile' or 'search_companies'. It doesn't mention prerequisites (e.g., needing a valid slug) or error cases (e.g., company not found).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company_perks (A)

Get your company's perks/benefits on Himalayas. Requires employer authentication.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the authentication requirement (auth scope), but fails to disclose other behavioral traits such as the return data structure, rate limits, caching behavior, or whether the operation is idempotent.

Conciseness: 5/5

The description consists of exactly two high-value sentences: the first establishes purpose and scope, while the second states the authentication requirement. There is no redundant or filler text; every word serves a distinct informational purpose.

Completeness: 4/5

Given the tool's low complexity (zero parameters) and lack of output schema, the description adequately covers the essential operational context. It identifies the target resource (perks/benefits) and security requirements. A minor gap remains regarding the structure or format of the returned perks data, but this is partially mitigated by the descriptive tool name.

Parameters: 4/5

The input schema contains zero parameters, which per the evaluation rules establishes a baseline score of 4. With no parameters to document, there are no semantic gaps to fill between the schema and description.

Purpose: 4/5

The description clearly states the tool retrieves 'your company's perks/benefits' with the platform context 'on Himalayas.' The possessive 'your company's' effectively scopes this to the authenticated employer's own data, implicitly distinguishing it from general company search tools like get_companies or get_company_details, though it doesn't explicitly contrast with sibling mutation tools (add/remove_company_perk).

Usage Guidelines: 3/5

The description provides the critical prerequisite 'Requires employer authentication,' establishing when the tool can be used. However, it lacks explicit guidance on when to select this over similar retrieval tools like get_company_profile or get_company_details, and does not specify what happens if authentication is missing.

get_company_profile (A)

Get your company's profile on Himalayas. Requires employer authentication.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the authentication requirement but omits other behavioral traits such as what data fields are returned, caching behavior, or rate limits. No contradictions with annotations exist since none are provided.

Conciseness: 5/5

The description consists of two efficient sentences. The first states the purpose immediately; the second provides the critical authentication constraint. There is no redundant or wasted text.

Completeness: 3/5

Given the tool's simplicity (no input parameters) and lack of output schema, the description is minimally adequate. However, it should ideally describe what profile information is returned (e.g., company name, description, perks) to compensate for the missing output schema.

Parameters: 4/5

The input schema contains zero parameters, establishing a baseline score of 4 per the rubric. The description correctly implies no filtering is needed (it gets 'your' singular profile), which aligns with the empty schema.

Purpose: 4/5

The description uses a specific verb ('Get') and resource ('your company's profile') that clearly indicates this retrieves the authenticated employer's own company data. The possessive 'your' effectively distinguishes it from sibling tool 'get_company_details' (which likely retrieves arbitrary public company data), though it does not explicitly name that sibling.

Usage Guidelines: 3/5

The description provides a critical usage constraint ('Requires employer authentication'), implying when the tool is applicable. However, it lacks explicit guidance on when to use this versus 'get_company_details' or 'get_companies', leaving the agent to infer the distinction based solely on the word 'your'.

get_conversation (A)

Get full message history for a conversation. Accepts room_name or talent_slug. Requires employer authentication.

Parameters (JSON Schema)
Name | Required | Description
room_name | No | Room name from list_conversations
talent_slug | No | Talent slug (from search_talent results); resolves to room name automatically
Behavior: 3/5

No annotations provided, so the description carries the full burden. It successfully discloses the authentication requirement and scope ('full' history). However, it fails to mention pagination behavior for large histories, return format, or whether this operation marks messages as read (given the separate mark_message_read sibling exists).

Conciseness: 5/5

Three sentences with zero waste: front-loaded with purpose ('Get full message history...'), followed by input options, and ending with the critical auth constraint. Every sentence provides distinct value not redundant with structured fields.

Completeness: 3/5

Adequate for a two-parameter tool with complete schema coverage, but gaps remain given the lack of output schema. The description omits what data structure is returned (message objects, plain text, etc.) and how 'full' history is handled for lengthy conversations (pagination, limits).

Parameters: 3/5

Schema description coverage is 100%, establishing baseline 3. The description notes that it 'Accepts room_name or talent_slug,' which reinforces that either identifier suffices, but the schema already documents the source of each parameter ('from list_conversations', 'from search_talent') and their relationship.

Purpose: 4/5

States a specific action ('Get full message history') and resource ('conversation'), distinguishing it from siblings like send_message or delete_conversation. However, it doesn't explicitly differentiate from list_conversations (which lists conversation metadata vs. this tool retrieving content of a specific conversation).

Usage Guidelines: 3/5

Mentions the authentication requirement ('Requires employer authentication') and input options ('Accepts room_name or talent_slug'), providing basic prerequisites. Lacks explicit guidance on when to use this versus list_conversations or when to prefer talent_slug over room_name.

get_correct_country_name (A)

Resolve a country name to the correct format accepted by Himalayas filters. Useful for fuzzy matching user input.

Parameters (JSON Schema)
Name | Required | Description
country_string | Yes | The country name to resolve
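The resolution happens server-side, but the behavior can be sketched locally. The following is a rough approximation of fuzzy country matching using a hypothetical candidate list and cutoff; the server's actual list and matching rules are not documented here.

```python
from difflib import get_close_matches
from typing import Optional

# Hypothetical candidate list for illustration only.
CANONICAL_COUNTRIES = ["United States", "United Kingdom", "Germany", "Canada"]

def resolve_country(user_input: str) -> Optional[str]:
    """Return the closest canonical country name, or None if nothing is close."""
    matches = get_close_matches(
        user_input.strip().title(), CANONICAL_COUNTRIES, n=1, cutoff=0.6
    )
    return matches[0] if matches else None
```

A client would call a helper like this before passing a country filter to job or company searches.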
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses that this performs format resolution for Himalayas filters, implying a lookup/normalization behavior. However, it lacks details on error handling (what happens if the country is not found), rate limits, or whether this is a read-only lookup operation.

Conciseness: 5/5

Two sentences, both earning their place. The first sentence establishes purpose and domain (Himalayas filters). The second sentence provides usage context (fuzzy matching). No redundant words or tautology.

Completeness: 4/5

For a single-parameter utility tool without an output schema, the description is appropriately complete. It explains the transformation (resolution to correct format) and the ecosystem context (Himalayas filters). Minor gap: it doesn't specify the return type (string vs object), but this is acceptable given the simplicity and lack of output schema.

Parameters: 4/5

With 100% schema coverage, the baseline is 3. The description adds value by specifying 'fuzzy matching user input,' which implies the country_string parameter accepts approximate/imperfect values beyond what the schema's generic description states.

Purpose: 5/5

The description uses the specific verb 'Resolve' with the specific resource 'country name' and clarifies the scope 'accepted by Himalayas filters.' It clearly distinguishes from siblings like get_jobs or search_companies by positioning itself as a normalization utility rather than a data retrieval tool.

Usage Guidelines: 4/5

The phrase 'Useful for fuzzy matching user input' provides clear context on when to use this tool (when handling imprecise country input). It implies this is a preprocessing step before using other Himalayas tools, though it could explicitly mention calling this before job/company searches with country filters.

get_job_details (A)

Get full details for a specific job including description, requirements, salary, and application link. Use the company_slug and job_slug from job listings.

Parameters (JSON Schema)
Name | Required | Description
job_slug | Yes | Job slug (from the job listing URL)
company_slug | Yes | Company slug (from the job listing URL or company page)
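Since both slugs come from the listing URL, a client could extract them with a helper like the one below. The path layout it assumes is a guess for illustration, not a documented Himalayas URL format; verify against a real listing URL before relying on it.

```python
from urllib.parse import urlparse

def slugs_from_listing_url(url: str) -> dict:
    """Extract company_slug and job_slug from a listing URL.

    Assumes a /companies/<company_slug>/jobs/<job_slug> path layout,
    which is an assumption made for illustration only.
    """
    parts = [p for p in urlparse(url).path.split("/") if p]
    if len(parts) == 4 and parts[0] == "companies" and parts[2] == "jobs":
        return {"company_slug": parts[1], "job_slug": parts[3]}
    raise ValueError(f"unrecognized listing URL: {url}")
```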
Behavior: 3/5

With no annotations provided, the description carries the burden of behavioral disclosure. It compensates partially by listing the specific fields returned (description, salary, etc.), addressing the lack of an output schema. However, it fails to explicitly confirm this is a safe, read-only operation or mention error conditions (e.g., job not found).

Conciseness: 5/5

The description is efficiently structured in two sentences with zero waste. The first sentence front-loads the core action and return value specifics; the second provides practical parameter sourcing guidance. Every word earns its place.

Completeness: 4/5

Given the tool's low complexity (2 string parameters, no nested objects) and lack of annotations/output schema, the description adequately covers the essential gaps by enumerating the returned data fields and parameter sources. It could be improved by noting error handling or read-only safety, but it meets the minimum viable threshold for this complexity level.

Parameters: 3/5

The input schema has 100% description coverage, establishing a baseline of 3. The description reinforces the schema by repeating that slugs come 'from job listings,' but doesn't add additional semantic value such as format constraints, valid character sets, or lookup behaviors beyond the schema definitions.

Purpose: 4/5

The description clearly states the action ('Get') and resource ('full details for a specific job'), and distinguishes itself from sibling list tools like 'get_jobs' by specifying the exact fields returned (description, requirements, salary, application link). However, it doesn't explicitly differentiate from similar singleton retrieval tools like 'show_company_job'.

Usage Guidelines: 3/5

The description provides prerequisite context ('Use the company_slug and job_slug from job listings'), indicating where to obtain parameter values. However, it lacks explicit guidance on when to use this tool versus siblings like 'get_jobs' or 'show_company_job', or when-not-to-use scenarios.

get_jobs (C)

Browse the latest remote job listings with optional filtering by country or worldwide availability

Parameters (JSON Schema)
Name | Required | Description
page | No | Page number for pagination (default: 1)
country | No | Filter jobs by country (e.g., 'Canada', 'United States', 'UK')
worldwide | No | Show ONLY 100% remote jobs available worldwide (overrides country filter)
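The schema's note that worldwide overrides the country filter can be made explicit client-side. A minimal sketch of that precedence, with argument names mirroring the schema (the actual filtering happens on the server):

```python
def effective_job_filter(country=None, worldwide=False, page=1):
    """Build get_jobs arguments, applying the documented precedence:
    worldwide=True overrides any country filter."""
    args = {"page": page}
    if worldwide:
        args["worldwide"] = True  # country is dropped, per the schema note
    elif country:
        args["country"] = country
    return args
```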
Behavior: 2/5

With no annotations provided, the description carries the full burden but omits critical behavioral details: it doesn't describe the return format (job object structure), pagination limits, or the conflict resolution logic between 'country' and 'worldwide' parameters (which the schema notes overrides).

Conciseness: 4/5

The single-sentence structure is appropriately front-loaded with the primary action and efficiently packs the key filtering concepts. It avoids redundancy but could be slightly more informative about the tool's list nature given the missing output schema.

Completeness: 3/5

For a tool with simple parameters and high schema coverage, the description meets minimum viability by stating the core function. However, gaps remain significant: no output structure is described (critical given no output schema exists), and no differentiation from the 30+ sibling tools (particularly 'search_jobs') is provided.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description acknowledges the filtering parameters ('country or worldwide availability') but adds no semantic enrichment beyond what the schema already provides, and notably omits mention of the 'page' parameter.

Purpose: 4/5

The description provides a clear verb ('Browse') and resource ('remote job listings') and specifies the temporal scope ('latest'). However, it fails to explicitly differentiate from the sibling tool 'search_jobs', which also retrieves job listings but likely with different query semantics.

Usage Guidelines: 2/5

No guidance is provided on when to use this browsing tool versus the 'search_jobs' alternative, nor does it mention prerequisites like pagination handling. The agent cannot determine if this is for discovery versus targeted retrieval.

get_my_profile (A)

Get your Himalayas profile information. Requires authentication.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

No annotations provided, so the description carries the full burden. It successfully discloses the authentication requirement but omits other behavioral traits: the read-only/safe nature, error conditions (e.g., 401 if unauthenticated), or what profile fields are typically returned. Adequate but incomplete behavioral disclosure.

Conciseness: 5/5

Two sentences, zero waste. Front-loaded with the core action ('Get your Himalayas profile information') followed by the constraint ('Requires authentication'). Every word earns its place; no redundancy or tautology.

Completeness: 3/5

Given that no output schema exists, the description should ideally hint at what profile information is returned (e.g., contact details, work history) to help the agent determine if this tool meets its needs. While the tool is simple (zero params), the lack of return value description leaves a gap in contextual completeness.

Parameters: 4/5

The input schema has zero parameters (empty properties object). Per guidelines, 0 params = baseline 4. The description correctly does not fabricate parameter documentation, and the 'Get your' phrasing appropriately signals this is a parameterless retrieval of authenticated user data.

Purpose: 5/5

Clear specific verb ('Get') + resource ('profile information') + scope ('your Himalayas'). Effectively distinguishes from siblings like get_company_profile (company vs personal), get_talent_profile (talent vs 'my'), and update_profile (read vs write) through the possessive 'your' and action verb.

Usage Guidelines: 4/5

Explicitly states 'Requires authentication,' establishing a prerequisite for usage. The possessive 'your' implicitly signals when to use this versus get_company_profile or get_talent_profile. However, it lacks explicit contrast with update_profile or guidance on when to prefer this over other profile-related tools.

get_remote_work_statistics (A)

Get remote work statistics: top skills, job categories, industries, or countries by job/company count. Great for understanding the remote work landscape.

Parameters (JSON Schema)
Name | Required | Description
type | No | Type of breakdown: 'skills', 'categories', 'countries', or 'industries' (default: skills). 'industries' only works with record='companies'.
record | No | What to get stats for: 'jobs' or 'companies' (default: jobs)
country | No | Filter stats by country (e.g., 'United States', 'Germany')
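The cross-parameter constraint (type='industries' is only valid with record='companies') can be checked client-side before calling the tool. A sketch mirroring the schema's field names and documented defaults:

```python
def validate_stats_args(type="skills", record="jobs"):
    """Pre-flight check for the schema constraint that type='industries'
    is only valid when record='companies'. (The parameter named 'type'
    shadows the builtin to match the schema field name.)"""
    if type == "industries" and record != "companies":
        raise ValueError("type='industries' requires record='companies'")
    return {"type": type, "record": record}
```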
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It explains what data is returned ('top' items by count) but omits operational details such as rate limits, caching behavior, or the specific response structure. The description relies on the schema to explain the 'industries' parameter constraint.

Conciseness: 5/5

Two efficient sentences with zero waste. The first sentence front-loads the action and available breakdown dimensions, while the second provides the value proposition. Every word earns its place.

Completeness: 4/5

For a 3-parameter tool with simple enums and no output schema, the description adequately explains the scope of returned data (available dimensions and metrics). It could improve by noting that parameters are optional or describing the ranking logic implied by 'top'.

Parameters: 4/5

With 100% schema coverage, the baseline is 3. The description adds value by characterizing the statistics as 'top' items (implying ranking/frequency), which clarifies the aggregation nature not explicitly stated in the schema parameter descriptions.

Purpose: 4/5

The description clearly states the tool retrieves 'remote work statistics' for specific dimensions (skills, categories, industries, countries) by count, distinguishing it from sibling tools like get_jobs or get_companies that retrieve individual records rather than aggregates.

Usage Guidelines: 3/5

It provides a positive use case ('Great for understanding the remote work landscape') but fails to explicitly contrast with siblings like get_jobs or search_jobs, or specify when NOT to use this tool (e.g., when seeking individual job listings rather than aggregated statistics).

get_salary_data (A)

Get salary benchmarks for remote jobs by job title, with optional seniority and country filters. Returns min, max, and median salary in USD.

Parameters (JSON Schema)
Name | Required | Description
country | No | Country slug to filter by (e.g., 'united-states', 'united-kingdom', 'germany')
job_title | Yes | Job title to look up salary for (e.g., 'software-engineer', 'product-manager', 'data-scientist'). Use hyphens instead of spaces.
seniority | No | Seniority level (e.g., 'senior', 'junior', 'lead', 'mid')
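The job_title parameter wants hyphenated slugs ('software-engineer'), so free-text user input needs normalizing first. A small sketch; the lowercasing and punctuation stripping are assumptions beyond the documented "hyphens instead of spaces" rule:

```python
import re

def to_job_title_slug(title: str) -> str:
    """Normalize a free-text job title to the hyphenated form the schema
    expects, e.g. 'Software Engineer' -> 'software-engineer'."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.strip().lower())
    return slug.strip("-")
```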
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively compensates by specifying the return structure ('Returns min, max, and median salary in USD'), which is critical given the absence of an output schema. However, it omits other behavioral traits such as rate limiting, caching behavior, or what occurs when no salary data matches the filters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. The first sentence front-loads the core purpose and parameter context, while the second sentence addresses the missing output schema by detailing return values. Every word serves a distinct purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (3 flat parameters) and lack of output schema, the description is appropriately complete. It discloses the return format (min/max/median USD) which compensates for the missing output schema. A score of 5 would require addressing edge cases (e.g., 'returns empty object if no data found'), but it adequately covers the happy path.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, providing detailed explanations for job_title, country, and seniority. The description frames these as 'filters', which adds minimal context but largely restates what the schema already documents. Since the schema is self-explanatory, the description appropriately avoids redundant elaboration.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get salary benchmarks'), resource ('remote jobs'), and filtering capabilities ('by job title, with optional seniority and country filters'). It effectively distinguishes itself from sibling tools like 'get_jobs' or 'get_remote_work_statistics' by focusing specifically on compensation data rather than job listings or general statistics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage context (retrieving salary data), it lacks explicit guidance on when to use this tool versus siblings like 'get_remote_work_statistics' or 'get_job_details'. It does not specify prerequisites, such as whether the job title must exist in a specific format or database, nor does it mention error conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_saved_jobs (B)

Get all jobs in your application tracker. Requires authentication.

Parameters

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the authentication requirement but fails to disclose pagination behavior, cache policies, or the structure/format of returned job data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of exactly two sentences with zero redundancy. The first sentence establishes the core function and scope, while the second states the authentication requirement. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and annotations, the description adequately covers the basic operation but leaves gaps regarding return value structure, pagination, and the specific relationship between 'application tracker' jobs and the save/remove operations. Sufficient for a simple getter but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, so description coverage is trivially complete. Per the baseline rule for zero-parameter tools, no additional parameter documentation is required, though the phrase 'all jobs' implicitly confirms that no filtering is possible.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('jobs in your application tracker'), distinguishing it from general job search tools like 'get_jobs' by specifying the personal scope ('your'). However, it could explicitly mention these are 'saved' jobs to better differentiate from sibling tools like 'save_job' or 'remove_saved_job'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'Requires authentication' as a prerequisite, but provides no guidance on when to use this tool versus alternatives like 'get_jobs' or 'search_jobs', nor does it mention the relationship to 'save_job' and 'remove_saved_job' operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_talent_profile (A)

Get full details for a candidate including bio, all experiences, education, tech stack, social links, and more. Use the talent_slug from search_talent results.

Parameters

Name | Required | Description
talent_slug | Yes | Talent slug (from search_talent results, e.g., 'john-doe')
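The search-then-fetch workflow the description prescribes can be sketched as below; `call_tool` is a stand-in for your MCP client's invocation method, and the search_talent query parameter name is an assumption:

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Placeholder for an MCP tools/call request; returns the request it would send."""
    return {"tool": name, "arguments": arguments}

# Step 1: search for candidates (the 'query' parameter name is hypothetical).
search_request = call_tool("search_talent", {"query": "rust developer"})

# Step 2: fetch the full profile using a talent_slug taken from the results
# ('john-doe' is the schema's own example value).
profile_request = call_tool("get_talent_profile", {"talent_slug": "john-doe"})
```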
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It effectively discloses return payload contents ('bio, all experiences, education, tech stack, social links'), but omits operational traits like safety guarantees, error cases (e.g., invalid slug), or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences with zero waste: first establishes scope and return data, second provides usage instruction. Perfectly front-loaded and appropriately sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter and lack of output schema, the description adequately compensates by enumerating the returned data fields (bio, experiences, etc.) and explaining the dependency on search_talent. Could mention error handling but sufficient for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with description 'Talent slug (from search_talent results...)'. The description reinforces this by explicitly mentioning 'talent_slug from search_talent results,' adding valuable workflow context beyond the schema's syntax definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with clear resource 'full details for a candidate' and distinguishes from sibling tools by specifying the target domain (candidate bio, experiences, tech stack) and linking to search_talent workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states to 'Use the talent_slug from search_talent results,' providing clear workflow guidance on when/how to invoke this tool relative to its sibling. Lacks explicit 'when not to use' but implies prerequisite of searching first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_company_jobs (A)

List your company's job postings on Himalayas. Shows status, views, clicks, and expiry. Requires employer authentication.

Parameters

Name | Required | Description
page | No | Page number for pagination (default: 1)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses return value characteristics ('Shows status, views, clicks, and expiry') and authentication requirements. It misses explicit safety classification (read-only nature is implied but not stated) and lacks details on pagination behavior or empty result handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: action statement, output fields, and auth requirement. The structure is front-loaded with the primary verb, and every clause delivers distinct value (purpose, return data shape, prerequisites).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter list operation without output schema, the description adequately covers the essential contract: what it lists, what fields are returned, and auth requirements. It could be improved by explicitly stating this is a safe read operation or mentioning default pagination behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the 'page' parameter, the baseline is 3. The description does not add parameter-specific semantics beyond the schema (e.g., total pages available, items per page), but none are required given the comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'List' with clear resource 'your company's job postings on Himalayas'. The possessive 'your company's' effectively distinguishes this from sibling tools like 'get_jobs' (public search) or 'get_job_details' (single job lookup), establishing the employer-scoped context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states 'Requires employer authentication', providing a critical prerequisite. However, it lacks explicit guidance on when to use this versus similar tools like 'show_company_job' (single job view) or 'get_jobs' (public listings), and doesn't mention what happens if the user isn't authenticated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_conversations (A)

List all your messaging conversations with last message preview. Shows conversation status (awaiting reply, new reply, read). Requires employer authentication.

Parameters

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and discloses key behavioral traits: authentication requirements and return data structure (last message preview, status values). However, it omits pagination behavior, rate limits, or error conditions for this list operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero redundancy. The description is front-loaded with the core action, followed by output details and constraints, with every sentence earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters) and lack of output schema, the description adequately compensates by describing the returned conversation data (status types, previews). It meets the minimum viable standard for this tool class, though pagination details would improve it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline of 4. The description appropriately requires no parameter clarification since the schema is trivially complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') with a clear resource ('messaging conversations') and scope ('all'). It effectively distinguishes from the sibling 'get_conversation' (singular retrieval) by emphasizing the plural 'all' and list action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context via the authentication requirement ('Requires employer authentication'), indicating when the tool is applicable. However, it lacks explicit guidance on when to use this versus 'get_conversation' for single-conversation retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mark_message_read (A)

Mark a message as read by its ID. Requires employer authentication.

Parameters

Name | Required | Description
room_name | No | Room name from list_conversations
message_id | Yes | Message ID to mark as read
talent_slug | No | Talent slug (from search_talent results) — resolves to room name automatically
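The interaction between the two optional conversation identifiers, which the description leaves entirely to the schema, can be sketched as a small argument builder; the precedence shown here, preferring talent_slug because it auto-resolves server-side, is an assumption:

```python
def build_mark_read_args(message_id, room_name=None, talent_slug=None):
    """Assemble mark_message_read arguments.

    message_id is required; room_name (from list_conversations) and
    talent_slug (from search_talent, resolved to a room server-side) are
    alternative ways to identify the conversation.
    """
    if not message_id:
        raise ValueError("message_id is required")
    args = {"message_id": message_id}
    if talent_slug is not None:
        args["talent_slug"] = talent_slug
    elif room_name is not None:
        args["room_name"] = room_name
    return args
```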
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions employer authentication requirement but fails to disclose mutation side effects, idempotency behavior, error cases (e.g., invalid ID), or rate limiting. 'Mark as read' implies state change but lacks operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first states purpose, second states auth requirement. Front-loaded with the core action. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-purpose state-change tool with no output schema, but gaps remain. Does not explain that talent_slug auto-resolves to room_name (though schema does) or specify error behaviors. Sufficient but minimal.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. Description references 'by its ID' (aligning with required message_id) but does not clarify the relationship between optional room_name and talent_slug parameters or when to use each, leaving that burden entirely on the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Mark') + resource ('message') + mechanism ('by its ID') clearly defines the operation. Distinct from sibling messaging tools like send_message (creation) or get_conversation (retrieval).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States authentication requirement ('Requires employer authentication'), providing a prerequisite constraint. However, lacks explicit guidance on when to use versus siblings (e.g., when to mark read vs. simply retrieving messages) or how to handle the optional room/talent parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

post_job_public (A)

Post a job on Himalayas without an account. Provide your email and company details. Payment is required — a Stripe checkout URL will be returned. No authentication needed.

Parameters

Name | Required | Description
title | Yes | Job title (5-80 characters)
seniority | Yes | Seniority levels
max_salary | No | Maximum salary
skill_list | No | Comma-separated skills
base_salary | No | Minimum salary
company_url | Yes | Company website URL
description | Yes | Job description (350+ characters, can include HTML)
company_name | Yes | Company name
category_list | No | Comma-separated job categories
valid_through | No | Expiration date (ISO format, defaults to 30 days from now)
customer_email | Yes | Your email address for payment and notifications
salary_country | No | Salary currency code (default: USD)
employment_type | Yes | Employment type
app_link_or_email | Yes | Application URL or email address
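The length constraints buried in the schema (title 5-80 characters, description 350+) can be pre-checked client-side before triggering the paid flow; this validator is a sketch mirroring the documented rules, not the server's actual validation:

```python
REQUIRED_FIELDS = ("title", "seniority", "company_url", "description",
                   "company_name", "customer_email", "employment_type",
                   "app_link_or_email")

def validate_public_job(args: dict) -> list:
    """Return violations of the documented constraints (empty list = OK)."""
    errors = [f"missing required field: {f}" for f in REQUIRED_FIELDS if f not in args]
    if "title" in args and not 5 <= len(args["title"]) <= 80:
        errors.append("title must be 5-80 characters")
    if "description" in args and len(args["description"]) < 350:
        errors.append("description must be at least 350 characters")
    return errors
```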
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It successfully communicates: (1) no authentication required, (2) payment mandatory, (3) Stripe checkout URL returned (critical since no output schema exists). Missing: whether job publishes immediately or pending payment confirmation, and error handling for failed payments.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: (1) purpose/scope, (2) param hint, (3) payment/return value, (4) auth requirements. Front-loaded with the core action. No redundant phrases or generic filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 14 parameters, zero annotations, no output schema, and a complex payment workflow, the description covers the essential contract: action, auth requirements, payment obligation, and return value type. Minor gap regarding the job publication state machine (immediate vs. post-payment confirmation).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description mentions 'email and company details' but does not add semantic value beyond what schema already documents (e.g., doesn't explain salary ranges, HTML formatting, or comma-separated list formats which are already in schema descriptions).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Post' with resource 'job on Himalayas'. The phrase 'without an account' effectively distinguishes this tool from sibling 'create_company_job', clarifying this is for unauthenticated/anonymous posting versus account-based management.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States 'without an account' and 'No authentication needed', clearly indicating when to use this versus authenticated alternatives. Mentions 'Payment is required' as a critical prerequisite. Lacks explicit naming of the alternative tool (create_company_job) for account-holders, which would make the contrast perfect.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

purchase_job_extras (A)

Purchase paid extras for an existing job posting: sticky ($199), newsletter ($99). Returns a Stripe checkout URL. Requires employer authentication.

Parameters

Name | Required | Description
extras | Yes | Extras to purchase: 'sticky' ($199 pin to top), 'newsletter' ($99 weekly email feature)
job_slug | Yes | Job slug to add extras to
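The per-extra pricing from the description can be captured in a small lookup so an agent can report the expected charge before opening checkout; modeling extras as a list of keywords is an assumption about the schema's exact wire format:

```python
# Prices in USD, as stated in the tool description.
EXTRA_PRICES = {"sticky": 199, "newsletter": 99}

def expected_checkout_total(extras) -> int:
    """Sum the documented prices; raises KeyError on an unknown extra.

    The actual charge is collected on the Stripe checkout page the tool returns.
    """
    return sum(EXTRA_PRICES[e] for e in extras)

# Hypothetical arguments; the job_slug value is illustrative.
args = {"job_slug": "senior-backend-engineer", "extras": ["sticky", "newsletter"]}
```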
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses: (1) returns a Stripe checkout URL (indicating external payment flow), (2) requires employer authentication, and (3) exact pricing. Could improve by mentioning idempotency or job state requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: action/scope, specific options with pricing, and return/auth requirements. Front-loaded with the most critical information (what it does and costs) before technical details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Since no output schema exists, the description appropriately documents the return value (Stripe checkout URL). Covers the essential elements for a financial transaction tool: cost, authentication, and return type. Missing only edge case handling (e.g., duplicate purchases).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with detailed descriptions, but the description adds valuable pricing semantics ($199, $99) not present in the schema, helping the agent understand the financial magnitude of the operation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Purchase' with resource 'paid extras for an existing job posting', distinguishing it from sibling tools like create_company_job or update_company_job. It explicitly names the two specific extras (sticky, newsletter) with exact prices, clarifying scope precisely.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context with 'existing job posting' (distinguishing from job creation) and 'Requires employer authentication' (prerequisite). However, lacks explicit when/when-not guidance or named alternatives (e.g., doesn't mention check_job_payment_status for verification).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remove_company_perk (A)

Remove a perk/benefit from your company on Himalayas. Requires employer authentication.

Parameters

Name | Required | Description
id | Yes | Perk ID to remove (use get_company_perks to find IDs)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the authentication requirement but omits other behavioral traits such as whether the removal is permanent/destructive, rate limits, or side effects on existing job postings.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence front-loads the core action and scope, while the second efficiently states the authentication requirement. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter deletion tool without output schema, the description adequately covers the operation, platform context (Himalayas), and authentication barrier. It could improve by noting the irreversible nature of deletion, but remains sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the 'id' parameter, including the helpful cross-reference to 'get_company_perks'. The description doesn't add parameter-specific semantics beyond the schema, so baseline 3 is appropriate given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Remove), resource (perk/benefit), and scope (company on Himalayas). It effectively distinguishes from sibling 'add_company_perk' through the opposite verb.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides the critical prerequisite 'Requires employer authentication,' establishing who can use this tool. However, it lacks explicit guidance on when not to use it or direct references to sibling alternatives like add_company_perk.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remove_saved_job (B)

Remove a job from your application tracker. Requires authentication.

Parameters

Name | Required | Description
id | Yes | Kanban item ID to remove
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but only mentions the authentication requirement. It fails to clarify whether the operation is permanent, affects the underlying job posting, or is reversible, leaving significant behavioral gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The two-sentence structure is appropriately front-loaded with the core action and contains no redundant or wasted text. Each sentence provides distinct value: the first defines the operation and scope, while the second states the critical auth requirement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one required parameter, no output schema, clear schema documentation), the description is minimally viable. However, it lacks necessary behavioral context for a destructive operation, such as confirming that only the saved reference is removed while the job posting remains intact.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('Kanban item ID to remove'), adequately documenting the single 'id' parameter. The description does not add parameter-specific semantics, which is acceptable given the schema's completeness, earning the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Remove') and resource ('job from your application tracker'), identifying the scope as personal job tracking. However, it misses the opportunity to explicitly differentiate from sibling 'delete_company_job' or clarify that this is the inverse of 'save_job'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states 'Requires authentication', establishing a necessary prerequisite, and 'your application tracker' implies personal use context. However, it lacks explicit guidance on when to choose this tool versus 'delete_company_job' or confirmation that this unsaves a job rather than deleting the posting itself.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
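The remove_saved_job critiques above translate directly into a revised description. Below is a minimal sketch in Python of what a fuller tool definition could look like; the dict layout and the added wording are illustrative assumptions, not the server's actual code:

```python
# Hypothetical revision of the remove_saved_job definition, folding in the
# behavioral and disambiguation gaps noted above. The extra sentences about
# scope and idempotency are assumed behavior, not confirmed by the server.
remove_saved_job = {
    "name": "remove_saved_job",
    "description": (
        "Remove a job from your personal application tracker (the inverse of "
        "save_job). Only the saved reference is deleted; the job posting "
        "itself remains live. To delete a posting your company owns, use "
        "delete_company_job instead. Requires authentication."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "id": {"type": "string", "description": "Kanban item ID to remove"},
        },
        "required": ["id"],
    },
}
```

A description shaped like this names the sibling tools directly, which is the "use X instead of Y when Z" guidance the rubric keeps asking for.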

save_job: C

Save a job to your application tracker. Requires authentication.

Parameters (JSON Schema)
- notes (optional): Personal notes about this job
- title (required): Job title
- status (optional): Application status (default: saved)
- app_link (optional): Application link URL
- currency (optional): Salary currency (e.g., 'USD')
- excitement (optional): Excitement level from 0-5
- max_salary (optional): Maximum salary
- base_salary (optional): Base salary
- company_name (required): Company name
- himalayas_link (optional): Himalayas job listing URL

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must carry the full behavioral burden, yet it discloses only the auth requirement. It fails to specify whether the call creates a new record or updates an existing one, whether the operation is idempotent, or what constitutes success or failure for a 10-parameter mutation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero redundancy. Front-loaded with core purpose; auth requirement efficiently appended. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 10-parameter mutation tool with no output schema and no annotations, description is insufficient. Lacks return value documentation, error scenarios, or lifecycle context (e.g., relationship to status updates) despite high parameter complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline. Description adds no parameter-specific guidance (e.g., format expectations for URLs, salary logic), but schema adequacy prevents lower score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (save) and destination (application tracker) clearly. Distinguishes from employer-oriented siblings like 'create_company_job' by framing as personal tracking, though could clarify distinction from 'update_job_status'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions 'Requires authentication' as a prerequisite but provides no guidance on when to use versus siblings like 'update_job_status' or 'remove_saved_job', nor prerequisites beyond auth.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies: C

Search for remote-friendly companies using keywords with optional country filtering

Parameters (JSON Schema)
- page (optional): Page number for pagination (default: 1)
- sort (optional): Sort order for results (default: 'relevant')
- country (optional): Filter companies by country (e.g., 'Canada', 'United States', 'UK')
- keyword (optional): Search keyword/term (optional)
- benefits (optional): Comma-separated benefit slugs to filter by (e.g., '401k,health-insurance,equity')
- worldwide (optional): Show only companies with 100% remote jobs available worldwide (overrides country filter)
- tech_stack (optional): Comma-separated technology slugs to filter by (e.g., 'react,typescript,python')

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It adds the domain context ('remote-friendly') not present in the schema, but omits behavioral details like pagination behavior, rate limits, or that the worldwide flag overrides country filtering.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is appropriately concise and front-loaded with the action verb. However, extreme brevity contributes to incompleteness, as it cannot convey the tool's full capability within one sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 7 optional parameters including complex filters (benefits, tech_stack) and special logic (worldwide overrides country), the description is incomplete. It highlights only 2 of 7 capabilities and provides no output guidance despite the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'keywords' and 'country filtering' which map to specific parameters, but adds no semantic clarity beyond the schema descriptions for the other five parameters (benefits, tech_stack, worldwide, page, sort).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific action ('Search') and resource ('remote-friendly companies') with key mechanisms ('keywords', 'country filtering'). It implies differentiation from sibling get_companies by emphasizing keyword-based searching, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus siblings like get_companies or search_jobs, nor does it mention prerequisites or constraints (e.g., the interaction between country and worldwide parameters).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_jobs: C

Search for remote jobs using keywords with optional filtering by country or worldwide availability

Parameters (JSON Schema)
- page (optional): Page number for pagination (default: 1)
- sort (optional): Sort order for results (default: 'relevant')
- type (optional): Filter by employment type. Comma-separate for multiple (e.g., 'full-time,contractor')
- country (optional): Filter jobs by country (e.g., 'Canada', 'United States', 'UK')
- keyword (optional): Search keyword/term (optional)
- markets (optional): Comma-separated market/category slugs to filter by (e.g., 'saas,fintech,healthcare')
- benefits (optional): Comma-separated benefit slugs to filter by (e.g., '401k,health-insurance,equity')
- currency (optional): Salary currency (default: USD)
- companies (optional): Comma-separated company slugs to filter by (e.g., 'stripe,gitlab')
- worldwide (optional): Show ONLY 100% remote jobs available worldwide (overrides country filter)
- experience (optional): Filter by experience/seniority level. Comma-separate for multiple (e.g., 'senior,manager')
- salary_max (optional): Maximum salary in the specified currency (default USD)
- salary_min (optional): Minimum salary in the specified currency (default USD)
- salary_required (optional): If false, include jobs without salary data in salary-filtered results (default: true — only jobs with salary)
- exclude_worldwide (optional): When true and country is set, return ONLY jobs specifically available in that country, excluding worldwide/remote jobs. Requires country to be set.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but only delivers the basic action. It omits critical behavioral traits: the override relationship between `worldwide` and `country` filters, pagination behavior, result limits, and what constitutes a valid search response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Search for remote jobs'). While appropriately concise, it may be overly brief given the tool's complexity—failing to signal the 15-parameter richness that would help an agent understand this is a comprehensive search interface.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 15-parameter tool with complex filtering (salary, employment type, experience, companies, benefits), the description is severely incomplete. It mentions only 3 conceptual parameters (keywords, country, worldwide) while ignoring the majority of filtering capabilities, and provides no output guidance despite the lack of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value by grouping parameters conceptually (country/worldwide as 'availability'), but does not explain the rich filtering capabilities (salary ranges, benefits, company filters, experience levels) available in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Search) and resource (remote jobs) and mentions key filtering capabilities (keywords, country, worldwide). However, it fails to distinguish from the sibling tool `get_jobs`, which likely retrieves specific jobs rather than searching, potentially causing agent confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like `get_jobs` or `search_companies`. The description does not mention prerequisites (e.g., whether keyword is required) or recommend specific filter combinations, leaving the agent without selection heuristics.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
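The undocumented filter interactions called out above (worldwide overriding country, exclude_worldwide requiring country) are exactly the kind of logic an agent or client wrapper could guard against before calling the tool. A hedged sketch; the helper name and warning strings are invented for illustration:

```python
# Hypothetical client-side guard for the search_jobs filter interactions
# that the schema implies but the description never states.
def validate_search_jobs_args(args: dict) -> list[str]:
    """Return warnings/errors to surface before calling search_jobs."""
    problems = []
    # Per the schema, worldwide=true silently overrides the country filter.
    if args.get("worldwide") and args.get("country"):
        problems.append("warning: worldwide=true overrides the country filter")
    # Per the schema, exclude_worldwide is only valid when country is set.
    if args.get("exclude_worldwide") and not args.get("country"):
        problems.append("error: exclude_worldwide requires country to be set")
    # Sanity check on the salary range (assumed, not documented behavior).
    lo, hi = args.get("salary_min"), args.get("salary_max")
    if lo is not None and hi is not None and lo > hi:
        problems.append("error: salary_min exceeds salary_max")
    return problems
```

A guard like this compensates for the missing "Behavior" documentation, but the cleaner fix remains stating these interactions in the tool description itself.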

search_talent: C

Search for remote talent/candidates on Himalayas. Browse professionals by keyword, country, or role.

Parameters (JSON Schema)
- page (optional): Page number for pagination (default: 1)
- sort (optional): Sort order (default: relevant)
- country (optional): Filter by country (e.g., 'United States', 'Germany')
- keyword (optional): Search keyword (e.g., 'react developer')

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention pagination behavior (despite the 'page' parameter), rate limits, authentication requirements, or what the tool returns (profiles, IDs, or summaries). It only identifies the target platform (Himalayas).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences and appropriately brief, but contains redundancy ('talent/candidates' and 'Search...Browse'). The information is front-loaded with the primary action, though the second sentence partially restates the first without adding significant new operational context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description should explain what data structure is returned and how pagination works. It fails to do so, leaving critical gaps for an agent trying to understand the full operation of this search tool with 4 optional parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions filtering by 'keyword, country, or role', though there is no 'role' parameter in the schema (only keyword, country, page, sort), which creates minor confusion. It does not add syntax details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for 'remote talent/candidates' on 'Himalayas' using specific verbs ('Search', 'Browse'). It effectively distinguishes from siblings like search_jobs and search_companies by specifying the talent/candidate resource, though it doesn't explicitly contrast with get_talent_profile for specific profile retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists available filters (keyword, country) but provides no guidance on when to use this versus get_talent_profile for specific lookups, nor does it mention that all parameters are optional or how to handle paginated results. No alternative tools or exclusion criteria are named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_message: A

Send a message in an existing conversation. Accepts room_name (from list_conversations) or talent_slug (from search_talent). Use start_conversation first if no conversation exists yet. Requires employer authentication.

Parameters (JSON Schema)
- message (required): Message text to send
- room_name (optional): Room name from list_conversations
- talent_slug (optional): Talent slug (from search_talent results) — resolves to room name automatically

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses critical behavioral traits: 'Requires employer authentication' (auth requirements), constraint that conversation must exist (implied failure mode), and that talent_slug 'resolves to room name automatically' (internal behavior). Minor gap on error handling or return values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each earning its place: purpose definition, parameter specification, workflow prerequisite, and authentication requirement. Front-loaded with core action, zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema (3 flat parameters) and lack of output schema, description adequately covers the operational complexity (authentication, prerequisite conversations, parameter relationships). Minor gap on describing the return value or success confirmation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). Description adds significant value by specifying data provenance for room_name and talent_slug, and explaining the automatic resolution behavior between talent_slug and room_name, helping users understand which parameter to use when.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states specific action ('Send a message'), target resource ('existing conversation'), and distinguishes from sibling tool start_conversation by emphasizing 'existing' vs new conversations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Use start_conversation first if no conversation exists yet' names the alternative tool. Also specifies parameter sources ('from list_conversations', 'from search_talent') clarifying prerequisite API calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

show_company_job: A

Get full details of one of your company's job postings on Himalayas. Requires employer authentication.

Parameters (JSON Schema)
- job_slug (required): Job slug (from list_company_jobs)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Successfully discloses authentication requirement, but omits other behavioral traits: read-only nature (implied by 'Get' but not explicit), error handling for invalid slugs, or return format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with the action verb. Zero redundancy. Every word earns its place by conveying operation type, resource, ownership scope, and authentication constraint.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter retrieval tool with complete schema coverage. However, lacks output schema description, leaving agents unaware of what job details fields are returned. Given no annotations and no output schema, this is a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage (job_slug described in schema). Description adds no parameter details, but with complete schema coverage, baseline 3 is appropriate. The schema description linking to list_company_jobs provides sufficient context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'company job postings' + scope 'one of your company's' effectively distinguishes from sibling get_job_details (public) and list_company_jobs (plural). The phrase 'your company's' is critical for identifying this as an employer-specific view.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States authentication requirement ('Requires employer authentication'), which defines who can invoke it. However, lacks explicit guidance on when to use this versus get_job_details or list_company_jobs, though the possessive 'your company's' implies the use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

start_conversation: A

Start a conversation with a candidate by talent slug. Optionally send an initial message. If the conversation already exists, returns it. Use search_talent to find candidates first, then use their slug here. Requires employer authentication.

Parameters (JSON Schema)
- message (optional): Optional initial message to send with the conversation
- talent_slug (required): Talent slug of the candidate to message (from search_talent results)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses the idempotency pattern and authentication requirement, but omits details about the return value structure, side effects (e.g., notifications triggered), or error states that would be expected for a mutation tool without annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five compact sentences deliver purpose, optional parameter behavior, idempotency logic, workflow prerequisites, and authentication requirements without redundancy. Every clause earns its place and the information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, flat schema) and lack of output schema, the description adequately covers the critical gaps: authentication needs and idempotency. Minor gap remains in not describing the return value structure, though it does confirm a conversation object is returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is appropriately met. The description adds workflow context that the talent_slug comes from search_talent results, but does not add syntax, format constraints, or examples beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific action (start a conversation), target resource (candidate), and key identifier (talent slug). It clearly distinguishes from sibling tools like 'send_message' by emphasizing this initiates the thread and notably clarifies idempotent behavior ('returns it' if exists) that differentiates it from a pure create operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance ('Use search_talent to find candidates first'), states prerequisites ('Requires employer authentication'), and clarifies the conditional outcome when the conversation already exists. This gives the agent clear context on when to select this tool versus search_talent or send_message.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
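Taken together, the descriptions for search_talent, start_conversation, and send_message encode a three-step workflow. A sketch of that flow; call_tool here is a hypothetical stand-in for an MCP client, and the result shapes are assumptions rather than the server's documented output:

```python
# Sketch of the candidate-messaging workflow the tool descriptions imply.
# `call_tool` is an assumed client stand-in, not a real library function.
def message_candidate(call_tool, keyword: str, text: str):
    # 1. search_talent returns candidates; their slugs feed the next steps.
    results = call_tool("search_talent", {"keyword": keyword})
    slug = results[0]["slug"]
    # 2. start_conversation is idempotent per its description: it creates
    #    the conversation or returns the existing one, so calling it
    #    unconditionally is safe.
    call_tool("start_conversation", {"talent_slug": slug})
    # 3. send_message accepts talent_slug directly and resolves the room.
    return call_tool("send_message", {"talent_slug": slug, "message": text})
```

This chain is only discoverable because each description names its predecessor, which is why these three tools score highest on Usage Guidelines.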

update_company_job: A

Update an existing job posting on Himalayas. Only provide fields you want to change. For screening_questions, provide the full set — questions not included will be removed. Requires employer authentication.

Parameters (JSON Schema)
- draft (optional): Save as draft instead of submitting for approval
- title (optional): Job title (5-80 characters)
- job_slug (required): Job slug to update
- seniority (optional): Seniority levels
- max_salary (optional): Maximum salary
- skill_list (optional): Comma-separated skills
- base_salary (optional): Minimum salary
- description (optional): Job description (350+ characters, can include HTML)
- category_list (optional): Comma-separated job categories
- valid_through (optional): Expiration date (ISO format)
- salary_country (optional): Salary currency code
- employment_type (optional): Employment type
- app_link_or_email (optional): Application URL or email address
- screening_questions (optional): Screening questions — provide full set, questions not included will be removed

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden. Discloses partial-update semantics, destructive replacement behavior for screening_questions array, and employer authentication requirement. Missing error behavior or return value details, but covers primary mutation risks.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste. Front-loaded with purpose, followed by operational guidance, specific parameter warning, and auth requirement. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 14-parameter mutation tool with nested objects and no annotations/output schema, description adequately covers the primary gotchas (partial updates, screening_questions replacement, auth). Would benefit from mentioning error cases (e.g., invalid job_slug) or return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). Description adds crucial PATCH semantic context ('Only provide fields you want to change') explaining how to use the 13 optional parameters collectively. Also reinforces the replace-all behavior for screening_questions array.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Update an existing job posting on Himalayas' with specific verb and resource. Clearly distinguishes from sibling tools create_company_job and delete_company_job through the 'update' action and 'existing' qualifier.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear partial-update guidance ('Only provide fields you want to change') and critical screening_questions behavior ('provide the full set'). Includes authentication prerequisite. Lacks explicit comparison to create_company_job for when to use each.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
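The replace-all semantics of screening_questions make a read-modify-write pattern the safe way to add a single question. A sketch assuming show_company_job returns the current screening_questions list; call_tool and the return shapes are hypothetical:

```python
# screening_questions is replace-all: any question omitted from an update
# is removed. A hypothetical read-modify-write wrapper that preserves the
# existing questions when appending a new one. `call_tool` is an assumed
# client stand-in, not a real library function.
def append_screening_question(call_tool, job_slug: str, question: str):
    # Fetch the current posting so no existing question is dropped.
    current = call_tool("show_company_job", {"job_slug": job_slug})
    questions = list(current.get("screening_questions", []))
    questions.append(question)
    # update_company_job is a partial update, so only the changed field is
    # sent; screening_questions itself must carry the full set.
    return call_tool("update_company_job", {
        "job_slug": job_slug,
        "screening_questions": questions,
    })
```

An agent that skips the read step and sends only the new question would silently delete every other screening question, which is why the description's explicit warning earns this tool its higher Behavior score.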

update_company_profile: A

Update your company's profile on Himalayas. Requires employer authentication.

Parameters (JSON Schema)
- ceo (optional): CEO name
- about (optional): Company description (HTML or plain text)
- summary (optional): Short company summary
- twitter (optional): Twitter URL
- facebook (optional): Facebook URL
- linkedin (optional): LinkedIn URL
- instagram (optional): Instagram URL
- year_founded (optional): Year the company was founded
- location_list (optional): Comma-separated list of locations (e.g., 'San Francisco, New York')
- num_employees_range (optional): Number of employees range

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the authentication requirement but omits critical mutation semantics—specifically whether updates are partial (PATCH-like) or destructive replacement, which is crucial given all 10 parameters are optional.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. The first sentence establishes purpose immediately; the second provides the authentication constraint. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 optional parameters) and lack of output schema, the description is minimally adequate but incomplete. It should clarify the partial update behavior (implied by optional params but not guaranteed) and any validation constraints on fields like HTML content in the 'about' parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear field definitions, establishing a baseline of 3. The description does not enumerate specific parameters, but this is acceptable since the schema comprehensively documents the 10 fields (CEO, about, social URLs, etc.).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Update), resource (company's profile), and scope (on Himalayas). It effectively distinguishes from siblings like update_company_job, update_profile, and update_company_tech_stack by focusing specifically on the company profile entity.

Usage Guidelines: 3/5

The description provides the prerequisite 'Requires employer authentication,' indicating when the tool is applicable. However, it lacks explicit guidance on when to use this versus read-only alternatives like get_company_profile, and does not clarify whether this replaces the entire profile or supports partial updates.

update_company_tech_stack (A)

Update your company's tech stack on Himalayas. Pass technology names (e.g., 'React', 'Python') and they'll be matched to the Himalayas stack database. Requires employer authentication.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| stacks | Yes | Array of technology names (e.g., ['React', 'TypeScript', 'Python', 'AWS']) | |
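Since the description does not say what happens to names that fail to match the Himalayas stack database, a defensive caller might clean the array before sending it. A hypothetical pre-processing sketch (client-side only, not part of the server):

```python
def normalize_stacks(names):
    """Deduplicate technology names case-insensitively, preserving order,
    so the stacks array sent to update_company_tech_stack is clean."""
    seen, out = set(), []
    for name in names:
        key = name.strip().lower()
        if key and key not in seen:
            seen.add(key)
            out.append(name.strip())
    return out

print(normalize_stacks(["React", "react ", "Python", "AWS"]))
# ['React', 'Python', 'AWS']
```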
Behavior: 3/5

Discloses external matching behavior ('matched to the Himalayas stack database') and authentication needs. However, omits critical behavioral traits: whether this replaces the entire stack or appends, and what happens if technologies aren't found in the database.

Conciseness: 5/5

Three sentences efficiently structured: purpose, parameter guidance, and authentication requirement. No redundant information; every sentence provides distinct value beyond the schema and metadata.

Completeness: 3/5

Adequate for a single-parameter tool with no output schema. Covers primary operation, parameter usage, and auth. However, missing clarification on stack replacement semantics and error handling for unmatched technologies.

Parameters: 3/5

With 100% schema description coverage, the structured schema already documents the 'stacks' parameter. Description reinforces with consistent examples ('React', 'Python') but adds no additional semantic depth regarding format constraints or validation rules.

Purpose: 4/5

States specific action (Update) and resource (company's tech stack) with platform context (Himalayas). However, fails to differentiate from sibling tool 'update_tech_stack', creating potential ambiguity about which to use for company vs. profile updates.

Usage Guidelines: 3/5

Specifies authentication requirement ('Requires employer authentication') implying when to use. However, lacks explicit guidance on when not to use vs sibling 'update_tech_stack' or behavior regarding invalid technology names.

update_job_status (C)

Update a saved job's status or details. Requires authentication.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| id | Yes | Kanban item ID | |
| notes | No | Updated notes | |
| status | No | New application status | |
| excitement | No | Excitement level from 0-5 | |
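Because only id is required and excitement is constrained to 0-5, an agent can validate arguments before ever invoking the tool. A hypothetical client-side sketch (the helper is illustrative, not part of the Himalayas API):

```python
def build_job_status_update(item_id, notes=None, status=None, excitement=None):
    """Validate and assemble arguments for update_job_status.
    Only 'id' is required; excitement must be an integer from 0 to 5."""
    if not item_id:
        raise ValueError("id (Kanban item ID) is required")
    if excitement is not None and excitement not in range(6):
        raise ValueError("excitement must be between 0 and 5")
    args = {"id": item_id}
    # Include only the optional fields the caller actually set.
    for key, value in (("notes", notes), ("status", status),
                       ("excitement", excitement)):
        if value is not None:
            args[key] = value
    return args
```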
Behavior: 2/5

No annotations provided, so description carries full burden. Only discloses authentication requirement. Fails to mention partial update behavior (only id required), error handling for invalid IDs, return value structure, or idempotency characteristics for this mutation operation.

Conciseness: 5/5

Two sentences, zero waste. Front-loaded with purpose ('Update...'), followed by prerequisite ('Requires authentication'). Every sentence earns its place.

Completeness: 2/5

For a 4-parameter mutation tool with no annotations and no output schema, the description is insufficient. Missing: return value description, partial update semantics, relationship to job application workflow, and error scenarios. Over-relies on schema to carry meaning.

Parameters: 3/5

Schema description coverage is 100% (all 4 parameters well-documented: id, notes, status enum, excitement range). Description loosely maps 'status or details' to parameters but adds no semantic context beyond schema (e.g., 'only changed fields needed', 'status progression flow'). Baseline 3 appropriate.

Purpose: 4/5

States specific verb 'Update' and resource 'saved job's status or details', distinguishing it from sibling 'update_company_job' by specifying 'saved job'. However, it doesn't explicitly clarify the workflow distinction from 'save_job' (create) versus this update operation.

Usage Guidelines: 2/5

Only mentions 'Requires authentication' as a prerequisite. Provides no guidance on when to use this versus 'save_job' or 'remove_saved_job', and doesn't indicate this is for tracking application progress on existing saved jobs.

update_profile (C)

Update your Himalayas profile. Requires authentication.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| bio | No | Short bio | |
| intro | No | Introduction/headline | |
| location | No | Location (e.g., 'San Francisco, CA') | |
| career_max_salary | No | Desired maximum salary | |
| career_base_salary | No | Desired minimum salary | |
| career_description | No | Description of what you're looking for | |
| career_primary_role | No | Primary role (e.g., 'software-engineer') | |
| career_search_status | No | Career search status (e.g., 'actively_searching', 'open_to_roles', 'closed_to_roles') | |
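The two salary fields interact (a desired minimum and maximum), but a JSON schema cannot express that relationship. A hypothetical pre-flight check, assuming the same drop-unset-fields convention the all-optional parameters imply (the helper is illustrative, not part of the Himalayas API):

```python
def build_profile_update(**fields):
    """Assemble a partial payload for update_profile, dropping unset fields
    and checking that the desired salary range is internally consistent."""
    base = fields.get("career_base_salary")
    top = fields.get("career_max_salary")
    if base is not None and top is not None and base > top:
        raise ValueError("career_base_salary must not exceed career_max_salary")
    # Send only fields the caller actually provided.
    return {k: v for k, v in fields.items() if v is not None}

print(build_profile_update(bio="Remote-first engineer",
                           career_base_salary=120000,
                           career_max_salary=160000))
```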
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions authentication requirements, it omits critical mutation behaviors: partial update semantics (PATCH-like), idempotency characteristics, side effects, or return value structure. The agent cannot infer what happens when the profile doesn't exist or which fields are mandatory.

Conciseness: 4/5

The description consists of two short, front-loaded sentences with no redundant or wasted language. However, the brevity borders on under-specification given the tool's complexity (8 parameters, mutation behavior).

Completeness: 2/5

Despite having 8 optional parameters and no output schema or annotations, the description lacks essential context. It doesn't explain the partial-update behavior implied by zero required fields, doesn't describe the return value, and doesn't clarify the distinction between this and the 40+ sibling tools, particularly 'update_company_profile'.

Parameters: 3/5

The input schema has 100% description coverage (e.g., 'Short bio', 'Desired minimum salary'), so the baseline is 3. The description provides no additional parameter context beyond the generic word 'profile', but it doesn't need to compensate given the comprehensive schema documentation.

Purpose: 4/5

The description clearly states the verb ('Update') and resource ('your Himalayas profile'), identifying this as a personal profile mutation. The possessive 'your' implicitly distinguishes it from the sibling tool 'update_company_profile', though it could be more explicit about targeting the authenticated user's talent profile versus company profiles.

Usage Guidelines: 2/5

The description only notes 'Requires authentication' as a constraint. It fails to specify when to use this tool versus 'update_company_profile', doesn't mention the partial-update pattern (all parameters optional), and provides no guidance on prerequisites or error conditions.

update_tech_stack (B)

Update your tech stack on your Himalayas profile. Requires authentication.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| stacks | Yes | Array of technology names (e.g., ['React', 'TypeScript', 'Node.js']) | |
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions authentication requirements but fails to specify the mutation semantics (replace vs. merge), error conditions, or what constitutes a successful response.

Conciseness: 5/5

The description consists of two efficient sentences with zero waste. It front-loads the core action and follows with the authentication requirement, making it appropriately sized for the tool's complexity.

Completeness: 3/5

Given the simple single-parameter schema with complete coverage, the description is minimally adequate. However, it lacks critical context regarding sibling differentiation and mutation behavior that would be necessary for an agent to use this tool confidently without trial and error.

Parameters: 3/5

The input schema has 100% description coverage with clear examples ('React', 'TypeScript'). Since the schema fully documents the `stacks` parameter, the description does not need to add parameter details, meeting the baseline expectation.

Purpose: 4/5

The description clearly states the verb (Update), resource (tech stack), and scope (your Himalayas profile). However, it fails to explicitly distinguish from the sibling tool `update_company_tech_stack`, which could cause selection confusion given the similar naming.

Usage Guidelines: 2/5

The description only notes 'Requires authentication' as a prerequisite. It provides no guidance on when to use this versus `update_profile` or `update_company_tech_stack`, and does not clarify whether this replaces existing tech stacks or appends to them.
