hr-compensation-mcp-server

Server Details

H1B visa salary disclosures + compensation benchmarks — real numbers, not estimates.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
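Since the server's transport is Streamable HTTP, a client opens a session by POSTing a JSON-RPC `initialize` request to the server URL, with an Accept header that allows both plain JSON and SSE responses. A minimal sketch, assuming a recent MCP protocol revision; the endpoint placeholder and client name are hypothetical, and no network call is made here:

```python
import json

# Hypothetical endpoint -- substitute the gateway URL shown on this page.
ENDPOINT = "https://<your-gateway-url>/mcp"

# Streamable HTTP: requests go over POST; the server may reply with plain
# JSON or an SSE stream, so the client advertises both in Accept.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

# JSON-RPC 2.0 initialize request, the first message of an MCP session.
initialize = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumption: a recent MCP revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

print(json.dumps(initialize))
```

After a successful `initialize` exchange, the client sends `tools/list` and `tools/call` requests over the same endpoint.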
Tool Descriptions: A

Average 4.1/5 across 2 of 2 tools scored.

Server Coherence: A

Disambiguation: 4/5

Both tools search salary data, but one is specifically for H1B visa salaries with employer details, while the other provides general salary ranges by location. Their purposes are distinct, though there is some domain overlap.

Naming Consistency: 5/5

Both tools follow a consistent 'search_' prefix followed by a descriptive noun, forming a clear pattern.

Tool Count: 3/5

With only two tools, the server feels sparse for a compensation domain. However, the tools cover two specific areas (H1B and general salaries), which may be acceptable for a narrow focus.

Completeness: 2/5

The server lacks tools for comparisons, historical trends, or company-specific searches, leaving significant gaps for comprehensive compensation analysis.

Available Tools

2 tools
search_h1b_salaries: A
Read-only

Search the U.S. H1B visa salary database for sponsored employment data. Returns employer name, job title, approved salary, visa year, work location (city/state), and visa status. Use for understanding visa compensation trends, benchmarking tech salaries, or researching employer sponsorship patterns.

Parameters

company (optional): Company name or partial name (e.g. 'Google', 'Meta', 'Apple')
location (optional): Work location as city or state (e.g. 'San Francisco, CA', 'Seattle, WA', 'New York')
job_title (optional): Job title to search (e.g. 'Software Engineer', 'Data Scientist', 'Product Manager')
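Since all three parameters are optional, an agent can pass any subset to narrow the search. A sketch of the JSON-RPC `tools/call` payload an MCP client would send for this tool (the request shape follows the MCP specification; the argument values are illustrative):

```python
import json

# Build a JSON-RPC 2.0 tools/call request for search_h1b_salaries.
# All three arguments are optional; any subset narrows the search.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_h1b_salaries",
        "arguments": {
            "company": "Google",              # partial names also match
            "location": "San Francisco, CA",  # city or state
            "job_title": "Software Engineer",
        },
    },
}

print(json.dumps(request, indent=2))
```

Omitting `arguments` keys entirely, rather than sending empty strings, is the safer way to leave a filter unset.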
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds value by detailing the exact fields returned and the context of sponsored employment data, without contradicting annotations. No destructive behavior is indicated, and the description aligns with read-only expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long, front-loaded with the action and resource, and every word serves a purpose. No redundant or extraneous information is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (3 optional params, no output schema), the description covers the tool's purpose, return fields, and use cases. However, it lacks details on pagination, result limits, or explicit differentiation from the sibling tool, which could be useful for a fully complete picture.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage and each parameter having a description, the description adds little beyond stating the overall purpose and return fields. The baseline of 3 is appropriate as the schema already documents the parameters adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as searching the U.S. H1B visa salary database for sponsored employment data, listing specific return fields (employer name, job title, salary, etc.) and distinguishing itself from the sibling tool 'search_salaries' by focusing exclusively on H1B visa data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides use cases (understanding visa compensation trends, benchmarking tech salaries, researching sponsorship patterns) but does not explicitly state when to use this tool versus the sibling 'search_salaries' or when not to use it, leaving the differentiation implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_salaries: A
Read-only

Query general salary data by job title and geographic location. Returns average salary, salary range, number of data points, and median compensation. Use for career planning, negotiation benchmarking, or compensation analysis across roles and regions.

Parameters

location (optional): Geographic location for salary lookup (e.g. 'San Francisco, CA', 'remote', 'United States')
job_title (required): Job position or role (e.g. 'Senior Software Engineer', 'UX Designer', 'DevOps Engineer')
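Unlike the sibling tool, `job_title` is required here while `location` is optional. A small helper sketching how a client might assemble the `tools/call` request (request shape per the MCP specification; the helper name and values are illustrative):

```python
import json

def build_search_salaries_call(job_title, location=None, request_id=2):
    """Assemble a JSON-RPC 2.0 tools/call request for search_salaries.

    job_title is required by the tool's schema; location is only
    included when the caller supplies one.
    """
    arguments = {"job_title": job_title}
    if location is not None:
        arguments["location"] = location
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "search_salaries", "arguments": arguments},
    }

print(json.dumps(build_search_salaries_call("UX Designer", "remote"), indent=2))
```

Calling `build_search_salaries_call("UX Designer")` with no location yields a nationwide (or unscoped) query, matching the schema's optional-location design.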
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint. Description adds return value specifics (average, range, count, median) without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences front-loaded with action, each sentence serving a distinct purpose (what, output, usage). No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers tool purpose, return fields (absent output schema), and usage context. Still missing minor details like data recency or sort order, but adequate for a simple query.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the description merely restates the purpose of parameters without adding new semantic details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it queries salary data by job title and location, returning specific metrics. Differentiates from sibling 'search_h1b_salaries' by being general salary data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases (career planning, benchmarking, analysis) and implies alternative via sibling tool name, but lacks explicit when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
