
Server Details

Search 350+ compensation surveys by industry, region, or job title. Independent directory.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: mkibrick/compshop
GitHub Stars: 0

Tool Descriptions: A

Average 4/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 4/5

Each tool has a distinct purpose, but the 'search' tool could return results that overlap with 'find_surveys_for_position' or 'recommend_surveys'. The descriptions clearly differentiate the use cases, however, so confusion is minimal.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case (e.g., find_surveys_for_position, list_vendors_by_industry, recommend_surveys), making the naming predictable and clear.

Tool Count: 5/5

With 7 tools, the server is well-scoped for a compensation survey directory, covering discovery, filtering, recommendations, and details without overloading or underproviding functionality.

Completeness: 5/5

The tool set covers all major operations for the domain: free-text search, filtered listing (by industry, region, position), recommendations, and detailed retrieval. There are no obvious gaps relative to user needs for survey discovery and exploration.

Available Tools

7 tools
find_surveys_for_position: A

Find compensation surveys that benchmark a specific job title or position (e.g. 'Software Engineer', 'Director of Finance', 'Registered Nurse'). Returns matching positions and the surveys that cover them.

Parameters (JSON Schema)
- position (required): Job title or position to look up (free text).
- limit (optional): Max position matches to return (default 5, max 20).
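As a sketch, an agent invoking this tool would issue a standard MCP `tools/call` request. The argument names ("position", "limit") come from the schema above; the envelope below is the usual JSON-RPC 2.0 shape, and the specific values are illustrative:

```python
import json

# Hypothetical tools/call request for find_surveys_for_position.
# Only "position" is required; "limit" is optional (default 5, max 20).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "find_surveys_for_position",
        "arguments": {
            "position": "Director of Finance",  # required, free text
            "limit": 5,  # optional; server default is 5, max 20
        },
    },
}
print(json.dumps(request, indent=2))
```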
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and description does not disclose behavioral traits such as read-only nature, rate limits, or side effects. Only states return type, which is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no extraneous information. First sentence defines action and input examples, second sentence explains output. Highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description adequately explains return value. Simple tool with few parameters; description covers all necessary aspects for an AI to select and invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. Description adds context by explaining the purpose ('benchmark') and return value ('matching positions and the surveys that cover them'), enhancing understanding beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Find' and clearly identifies the resource (compensation surveys) and the input (job title/position). It distinguishes from siblings like 'search' and 'recommend_surveys' by focusing on benchmarking a specific position.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies when to use (when you have a job title to benchmark) but provides no explicit guidance on alternatives or when not to use. No comparison with sibling tools like 'search' or 'recommend_surveys'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_report: A

Detailed info on a single survey report by slug.

Parameters (JSON Schema)
- slug (required): Report slug (e.g. 'pas-aggregates-industry').
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states it returns 'detailed info', which is somewhat transparent, but without annotations, more detail on what 'detailed info' includes, or on potential side effects, would be beneficial. Nothing in the description contradicts the missing annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no extraneous words, and the core purpose is front-loaded. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With one parameter, no output schema, and no annotations, the description provides minimal context. It is adequate but could detail what 'detailed info' includes or mention the read-only nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds an example slug ('pas-aggregates-industry'), which clarifies the param format beyond the schema's type and description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'get' and resource 'detailed info on a single survey report', with the method 'by slug', distinguishing it from sibling tools like list_vendors_by_industry or find_surveys_for_position.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. It implies the caller needs a specific slug, but does not say when not to use it or give context for choosing it over sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_vendor: A

Detailed info on a specific vendor by slug, including all of their reports. Use after search or list_vendors_by_industry returns a candidate.

Parameters (JSON Schema)
- slug (required): Vendor slug (e.g. 'mercer-benchmark-database', 'pas', 'wtw').
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It mentions that the tool includes 'all of their reports', which is a behavioral trait, but does not disclose other aspects like whether it is read-only or any side effects. This is adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that efficiently convey purpose and usage. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only one parameter, no output schema, and low complexity, the description is fairly complete. It explains what the tool does and when to use it. However, it could briefly mention the return format or structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage with a clear description of the 'slug' parameter. The description's mention of 'by slug' adds no extra meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'detailed info on a specific vendor by slug, including all of their reports', which is a specific verb+resource. It also distinguishes from sibling tools by noting it is used after search or list operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use after `search` or `list_vendors_by_industry` returns a candidate', providing clear context for when to use this tool. However, it does not explicitly mention when not to use it or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_vendors_by_industry: A

List every CompShop vendor that publishes surveys for a given industry. Use when the user asks 'what survey publishers cover [industry]?'

Parameters (JSON Schema)
- industry (required): Industry category. One of: general-industry, healthcare, life-sciences, tech, media, financial-services, insurance, energy, construction, retail, higher-ed, legal, nonprofit, executive, free.
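Because `industry` is a closed enum, a client can reject bad values before issuing the call. A minimal sketch, where the allowed values are copied from the schema above and the helper name is hypothetical:

```python
# Allowed values for the industry parameter, copied from the schema above.
ALLOWED_INDUSTRIES = {
    "general-industry", "healthcare", "life-sciences", "tech", "media",
    "financial-services", "insurance", "energy", "construction", "retail",
    "higher-ed", "legal", "nonprofit", "executive", "free",
}

def build_arguments(industry: str) -> dict:
    """Validate the enum client-side, then build the tools/call arguments."""
    if industry not in ALLOWED_INDUSTRIES:
        raise ValueError(
            f"unknown industry {industry!r}; expected one of: "
            + ", ".join(sorted(ALLOWED_INDUSTRIES))
        )
    return {"industry": industry}
```

Validating locally turns a likely server-side error into an immediate, descriptive failure the agent can correct.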
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It does not disclose any behavioral traits like pagination, rate limits, filtering nuances, or what 'every' means. The agent lacks key details for reliable invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tight sentences with the action front-loaded and a usage example. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 1-parameter tool with no output schema, the description covers purpose and usage. Missing behavioral context (e.g., pagination) is a minor gap given the simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and includes a description for 'industry'. The tool description adds no new parameter-level meaning beyond the schema, justifying a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (list), the resource (vendors that publish surveys), and the filter (industry). It also provides an example query, distinguishing it from siblings like list_vendors_by_region.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use when the user asks...'. Does not mention alternatives or when not to use, but the positive guidance is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_vendors_by_region: A

List vendors with survey coverage in a given region. Use when the user asks 'what publishers cover [region]?' Region is matched against both vendor-level scope and individual report scopes.

Parameters (JSON Schema)
- region (required): One of: United States, Canada, United Kingdom, Europe, Asia Pacific, Latin America, Middle East & Africa, Global.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that region matching applies at both vendor and report scopes, but does not mention read-only nature, auth requirements, or return format. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. Information is front-loaded and directly addresses purpose and usage. Excellent conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers purpose, usage, and matching behavior. It is nearly complete but could mention the return type (list of vendors) for better completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single enum parameter. The description adds value by explaining that region is matched against both vendor-level and individual report scopes, providing context beyond the schema definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'List vendors with survey coverage in a given region' with a specific verb and resource, and provides an example use case ("what publishers cover [region]?"). It implicitly distinguishes itself from the sibling 'list_vendors_by_industry' by focusing on region.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use when the user asks...' which provides clear context. It also explains matching behavior (vendor-level and report scopes). It does not mention alternatives or when not to use, but the guidance is sufficient for the simple use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_surveys: A

Recommend the best-fit compensation surveys given a hiring/benchmarking context. Use when the user asks 'what survey should I use for [situation]?' Returns ranked vendors with rationale. Required: industry. Optional: region, role focus.

Parameters (JSON Schema)
- industry (required): Primary industry. One of: general-industry, healthcare, life-sciences, tech, media, financial-services, insurance, energy, construction, retail, higher-ed, legal, nonprofit, executive, free.
- region (optional): One of: United States, Canada, United Kingdom, Europe, Asia Pacific, Latin America, Middle East & Africa, Global.
- role_focus (optional): Free text describing the role types being benchmarked (e.g. 'software engineers', 'physicians', 'sales reps', 'CEO and C-suite').
- limit (optional): Max recommendations (default 5, max 10).
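Since only `industry` is required, a caller can build the arguments by omitting unset optionals so that server defaults apply. A hypothetical sketch using the parameter names above (the function name and clamping choice are assumptions, not part of the server):

```python
def recommend_surveys_args(industry, region=None, role_focus=None, limit=None):
    """Build arguments for a recommend_surveys tools/call.

    industry is required; region, role_focus, and limit are left out of
    the payload when not supplied, so the server defaults apply
    (limit defaults to 5 server-side).
    """
    args = {"industry": industry}
    if region is not None:
        args["region"] = region
    if role_focus is not None:
        args["role_focus"] = role_focus
    if limit is not None:
        args["limit"] = min(limit, 10)  # schema caps limit at 10
    return args
```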
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must disclose behavioral traits. It states the tool returns ranked vendors with rationale, implying a read-only operation. However, it does not mention auth needs, rate limits, or side effects. The lack of contradiction is neutral, but more detail would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (four short sentences) and front-loaded with the core purpose. Every sentence adds value: purpose, usage guidance, return shape, and parameter requirements. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no output schema, and no annotations, the description covers the main aspects: function, usage context, and key parameters. It could be improved by detailing the output format or differentiating from sibling tools like find_surveys_for_position, but it is largely sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides full descriptions for all parameters (100% coverage). The description adds a summary of required/optional but does not add new meaning beyond the schema. Since the schema is rich, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to recommend best-fit compensation surveys given a hiring/benchmarking context. It uses a specific verb ('recommend') and resource ('compensation surveys'), and the context of use is distinct from sibling tools like list_vendors_by_industry.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance ('Use when the user asks...') and highlights required (industry) and optional (region, role focus) parameters. It does not explicitly state when not to use or mention alternatives, but the context is clear enough for an agent to decide.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
