CompShop
Server Details
Search 350+ compensation surveys by industry, region, or job title. Independent directory.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | mkibrick/compshop |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 7 of 7 tools scored.
Each tool has a distinct purpose, though the 'search' tool could return results similar to 'find_surveys_for_position' or 'recommend_surveys'. The descriptions clearly differentiate the use cases, however, so confusion is minimal.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., find_surveys_for_position, list_vendors_by_industry, recommend_surveys), making the naming predictable and clear.
With 7 tools, the server is well-scoped for a compensation survey directory, covering discovery, filtering, recommendations, and details without overloading or underproviding functionality.
The tool set covers all major operations for the domain: free-text search, filtered listing (by industry, region, position), recommendations, and detailed retrieval. There are no obvious gaps for survey discovery and exploration needs.
Available Tools
7 tools

find_surveys_for_position
Find compensation surveys that benchmark a specific job title or position (e.g. 'Software Engineer', 'Director of Finance', 'Registered Nurse'). Returns matching positions and the surveys that cover them.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max position matches to return (max 20) | 5 |
| position | Yes | Job title or position to look up (free text). | |
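As an illustration, a standard MCP `tools/call` request for this tool might look like the following sketch; the argument values are illustrative, not taken from the server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_surveys_for_position",
    "arguments": {
      "position": "Software Engineer",
      "limit": 5
    }
  }
}
```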
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as read-only nature, rate limits, or side effects. It states only the return type, which is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no extraneous information. First sentence defines action and input examples, second sentence explains output. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains return value. Simple tool with few parameters; description covers all necessary aspects for an AI to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. Description adds context by explaining the purpose ('benchmark') and return value ('matching positions and the surveys that cover them'), enhancing understanding beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Find' and clearly identifies the resource (compensation surveys) and the input (job title/position). It distinguishes from siblings like 'search' and 'recommend_surveys' by focusing on benchmarking a specific position.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies when to use (when you have a job title to benchmark) but provides no explicit guidance on alternatives or when not to use. No comparison with sibling tools like 'search' or 'recommend_surveys'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_report
Detailed info on a single survey report by slug.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Report slug (e.g. 'pas-aggregates-industry'). | |
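A hypothetical invocation sketch, reusing the example slug from the parameter description:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_report",
    "arguments": { "slug": "pas-aggregates-industry" }
  }
}
```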
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states the tool returns 'detailed info', which is somewhat transparent, but without annotations it would benefit from detail on what 'detailed info' includes and on potential side effects. Nothing in the description contradicts the missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no extraneous words, and the core purpose is front-loaded. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With one parameter, no output schema, and no annotations, the description provides minimal context. It is adequate but could detail what 'detailed info' includes or mention the read-only nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds an example slug ('pas-aggregates-industry'), which clarifies the param format beyond the schema's type and description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'get' and resource 'detailed info on a single survey report', with the method 'by slug', distinguishing it from sibling tools like list_vendors_by_industry or find_surveys_for_position.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. It implies the user needs a specific slug, but doesn't mention when not to use or provide context for selecting this over sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vendor
Detailed info on a specific vendor by slug, including all of their reports. Use after search or list_vendors_by_industry returns a candidate.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Vendor slug (e.g. 'mercer-benchmark-database', 'pas', 'wtw'). | |
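A sketch of the follow-up call the description suggests, after search or list_vendors_by_industry surfaces a candidate, using one of the example slugs:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_vendor",
    "arguments": { "slug": "mercer-benchmark-database" }
  }
}
```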
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It mentions that the tool includes 'all of their reports', which is a behavioral trait, but does not disclose other aspects like whether it is read-only or any side effects. This is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys purpose and usage. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter, no output schema, and low complexity, the description is fairly complete. It explains what the tool does and when to use it. However, it could briefly mention the return format or structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with a clear description of the 'slug' parameter. The description's mention of 'by slug' adds no extra meaning beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'detailed info on a specific vendor by slug, including all of their reports', which is a specific verb+resource. It also distinguishes from sibling tools by noting it is used after search or list operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use after `search` or `list_vendors_by_industry` returns a candidate', providing clear context for when to use this tool. However, it does not explicitly mention when not to use it or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_vendors_by_industry
List every CompShop vendor that publishes surveys for a given industry. Use when the user asks 'what survey publishers cover [industry]?'
| Name | Required | Description | Default |
|---|---|---|---|
| industry | Yes | Industry category. One of: general-industry, healthcare, life-sciences, tech, media, financial-services, insurance, energy, construction, retail, higher-ed, legal, nonprofit, executive, free. | |
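For example, a hypothetical call using one of the enumerated industry values:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "list_vendors_by_industry",
    "arguments": { "industry": "healthcare" }
  }
}
```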
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It does not disclose any behavioral traits like pagination, rate limits, filtering nuances, or what 'every' means. The agent lacks key details for reliable invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tight sentences with the action front-loaded and a usage example. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 1-parameter tool with no output schema, the description covers purpose and usage. Missing behavioral context (e.g., pagination) is a minor gap given the simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and includes a description for 'industry'. The tool description adds no new parameter-level meaning beyond the schema, justifying a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list), the resource (vendors that publish surveys), and the filter (industry). It also provides an example query, distinguishing it from siblings like list_vendors_by_region.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use when the user asks...'. Does not mention alternatives or when not to use, but the positive guidance is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_vendors_by_region
List vendors with survey coverage in a given region. Use when the user asks 'what publishers cover [region]?' Region is matched against both vendor-level scope and individual report scopes.
| Name | Required | Description | Default |
|---|---|---|---|
| region | Yes | One of: United States, Canada, United Kingdom, Europe, Asia Pacific, Latin America, Middle East & Africa, Global. | |
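A hypothetical request sketch, using one of the enumerated region values:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "list_vendors_by_region",
    "arguments": { "region": "Europe" }
  }
}
```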
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that region matching applies at both vendor and report scopes, but does not mention read-only nature, auth requirements, or return format. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. Information is front-loaded and directly addresses purpose and usage. Excellent conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers purpose, usage, and matching behavior. It is nearly complete but could mention the return type (list of vendors) for better completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single enum parameter. The description adds value by explaining that region is matched against both vendor-level and individual report scopes, providing context beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'List vendors with survey coverage in a given region' with a specific verb and resource, and provides an example use case ("what publishers cover [region]?"). It implicitly distinguishes itself from the sibling 'list_vendors_by_industry' by focusing on region.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when the user asks...' which provides clear context. It also explains matching behavior (vendor-level and report scopes). It does not mention alternatives or when not to use, but the guidance is sufficient for the simple use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_surveys
Recommend the best-fit compensation surveys given a hiring/benchmarking context. Use when the user asks 'what survey should I use for [situation]?' Returns ranked vendors with rationale. Required: industry. Optional: region, role focus.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max recommendations (max 10) | 5 |
| region | No | One of: United States, Canada, United Kingdom, Europe, Asia Pacific, Latin America, Middle East & Africa, Global. | |
| industry | Yes | Primary industry. One of: general-industry, healthcare, life-sciences, tech, media, financial-services, insurance, energy, construction, retail, higher-ed, legal, nonprofit, executive, free. | |
| role_focus | No | Free-text describing the role types being benchmarked (e.g. 'software engineers', 'physicians', 'sales reps', 'CEO and C-suite'). | |
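A hypothetical request combining the required industry with the optional filters; the values are illustrative, drawn from the enumerations and examples above:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "recommend_surveys",
    "arguments": {
      "industry": "tech",
      "region": "United States",
      "role_focus": "software engineers",
      "limit": 5
    }
  }
}
```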
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must disclose behavioral traits. It states the tool returns ranked vendors with rationale, implying a read-only operation. However, it does not mention auth needs, rate limits, or side effects. The lack of contradiction is neutral, but more detail would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and front-loaded with the core purpose. Every sentence adds value: purpose, usage guidance, and parameter requirements. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no output schema, and no annotations, the description covers the main aspects: function, usage context, and key parameters. It could be improved by detailing the output format or differentiating from sibling tools like find_surveys_for_position, but it is largely sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides full descriptions for all parameters (100% coverage). The description adds a summary of required/optional but does not add new meaning beyond the schema. Since the schema is rich, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to recommend best-fit compensation surveys given a hiring/benchmarking context. It uses a specific verb ('recommend') and resource ('compensation surveys'), and the context of use is distinct from sibling tools like list_vendors_by_industry.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use guidance ('Use when the user asks...') and highlights required (industry) and optional (region, role focus) parameters. It does not explicitly state when not to use or mention alternatives, but the context is clear enough for an agent to decide.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search
Free-text search across the CompShop directory of compensation surveys. Returns matching vendors, reports, job families, and positions. Use for open-ended discovery questions like 'biotech surveys in Europe' or 'CEO compensation data'.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results per group (max 15) | 5 |
| query | Yes | Free-text query (job title, industry, vendor, geography, etc.) | |
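A sketch using one of the open-ended example queries from the tool description:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "biotech surveys in Europe", "limit": 5 }
  }
}
```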
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description adds return types and example queries but lacks details on auth, rate limits, or side effects. Adequate for a read-only search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each valuable: first defines scope, second provides usage examples. No filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple tool with good parameter descriptions and clearly differentiated sibling tools, the description explains purpose and usage well. It lacks return-format details but is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; the description adds context for 'query' (e.g., job title, industry) beyond the schema, while the 'limit' description matches the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it performs free-text search across the CompShop directory and lists return types (vendors, reports, job families, positions). Distinguishes from sibling tools that are more specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage examples for open-ended queries, implying it should not be used for targeted filtered lookups. It does not explicitly list alternatives, but the context suggests them.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!