hr-compensation-mcp-server
Server Details
H1B visa salary disclosures + compensation benchmarks — real numbers, not estimates.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 2 of 2 tools scored.
Both tools search salary data, but one is specifically for H1B visa salaries with employer details, while the other provides general salary ranges by location. Their purposes are distinct, though there is some domain overlap.
Both tools follow a consistent 'search_' prefix followed by a descriptive noun, forming a clear pattern.
With only two tools, the server feels sparse for a compensation domain. However, the tools cover two specific areas (H1B and general salaries), which may be acceptable for a narrow focus.
The server lacks tools for comparisons, historical trends, or company-specific searches, leaving significant gaps for comprehensive compensation analysis.
Available Tools
2 tools
search_h1b_salaries (Read-only)
Search the U.S. H1B visa salary database for sponsored employment data. Returns employer name, job title, approved salary, visa year, work location (city/state), and visa status. Use for understanding visa compensation trends, benchmarking tech salaries, or researching employer sponsorship patterns.
| Name | Required | Description | Default |
|---|---|---|---|
| company | No | Company name or partial name (e.g. 'Google', 'Meta', 'Apple') | |
| location | No | Work location as city or state (e.g. 'San Francisco, CA', 'Seattle, WA', 'New York') | |
| job_title | No | Job title to search (e.g. 'Software Engineer', 'Data Scientist', 'Product Manager') | |
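As a sketch of how an agent might invoke this tool, the following builds a JSON-RPC 2.0 `tools/call` request as defined by the MCP specification. The filter values and request `id` are illustrative, and the transport wiring (Streamable HTTP endpoint, headers) is assumed rather than documented on this listing:

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 `tools/call` request as used by MCP."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(payload)

# All three parameters are optional; any subset narrows the search.
request = build_tool_call(
    "search_h1b_salaries",
    {"company": "Google", "location": "Seattle, WA", "job_title": "Software Engineer"},
)
```

Omitting all three arguments is schema-valid here, though an unfiltered query presumably returns a very broad result set.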
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds value by detailing the exact fields returned and the context of sponsored employment data, without contradicting annotations. No destructive behavior is indicated, and the description aligns with read-only expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, front-loaded with the action and resource, and every word serves a purpose. No redundant or extraneous information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (3 optional params, no output schema), the description covers the tool's purpose, return fields, and use cases. However, it lacks details on pagination, result limits, or explicit differentiation from the sibling tool, which could be useful for a fully complete picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage and each parameter having a description, the description adds little beyond stating the overall purpose and return fields. The baseline of 3 is appropriate as the schema already documents the parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as searching the U.S. H1B visa salary database for sponsored employment data, listing specific return fields (employer name, job title, salary, etc.) and distinguishing itself from the sibling tool 'search_salaries' by focusing exclusively on H1B visa data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides use cases (understanding visa compensation trends, benchmarking tech salaries, researching sponsorship patterns) but does not explicitly state when to use this tool versus the sibling 'search_salaries' or when not to use it, leaving the differentiation implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_salaries (Read-only)
Query general salary data by job title and geographic location. Returns average salary, salary range, number of data points, and median compensation. Use for career planning, negotiation benchmarking, or compensation analysis across roles and regions.
| Name | Required | Description | Default |
|---|---|---|---|
| location | No | Geographic location for salary lookup (e.g. 'San Francisco, CA', 'remote', 'United States') | |
| job_title | Yes | Job position or role (e.g. 'Senior Software Engineer', 'UX Designer', 'DevOps Engineer') | |
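Unlike the H1B tool, this one has a required parameter (`job_title`), so a client-side check before calling can catch the one hard failure mode. A minimal sketch, with the hypothetical helper name `validate_search_salaries_args`:

```python
def validate_search_salaries_args(arguments: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the call can proceed."""
    errors = []
    # job_title is the only required parameter.
    if not arguments.get("job_title"):
        errors.append("job_title is required (e.g. 'Senior Software Engineer')")
    # location is optional; flag any keys outside the declared schema.
    allowed = {"job_title", "location"}
    for key in arguments:
        if key not in allowed:
            errors.append(f"unknown parameter: {key}")
    return errors
```

For example, `validate_search_salaries_args({"location": "remote"})` flags the missing `job_title`, while `{"job_title": "UX Designer"}` passes with no errors.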
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint. Description adds return value specifics (average, range, count, median) without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences front-loaded with action, each sentence serving a distinct purpose (what, output, usage). No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers tool purpose, return fields (in the absence of an output schema), and usage context. Still missing minor details such as data recency or sort order, but adequate for a simple query.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description merely restates the purpose of parameters without adding new semantic details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it queries salary data by job title and location, returning specific metrics. Differentiates from sibling 'search_h1b_salaries' by being general salary data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases (career planning, benchmarking, analysis) and implies alternative via sibling tool name, but lacks explicit when-not-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
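Before publishing, you can sanity-check the file's shape locally. The following sketch checks only the fields shown in the snippet above; the full schema Glama enforces at the `$schema` URL is assumed, not documented here:

```python
import json

def check_glama_json(text: str) -> bool:
    """Loosely validate the /.well-known/glama.json structure shown above."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        return False
    # Every maintainer entry must carry an email string.
    return all(
        isinstance(m, dict) and isinstance(m.get("email"), str) and "@" in m["email"]
        for m in maintainers
    )

sample = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""
```

This catches malformed JSON and a missing or empty `maintainers` array, the two likeliest reasons verification would silently fail.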
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.