golf-majors
Server Details
Golf Majors 2026 — Masters, PGA Championship, US Open, The Open. Tournaments, venues, players.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 5 of 5 tools scored. Lowest: 3.4/5.
Each tool has a clearly distinct purpose: next major, players, single tournament, list of tournaments, venues. No overlapping functionality.
All tools use a consistent verb-noun naming pattern (getXxx) with the uniform prefix 'whensport_golf_', making them predictable and easy to understand.
With 5 tools, the set is well-scoped for a specialized golf majors schedule server—neither too sparse nor excessive.
Covers core needs (schedule, tournaments, venues, players), but lacks search or filter capabilities. Data gaps noted in descriptions are about backend ingestion, not missing endpoints.
Available Tools (5)
whensport_golf_getNextMajor (grade A, read-only)
Get the next upcoming Major (Masters / PGA / US Open / The Open).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
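Because the tool takes no parameters, a call is a single round trip. The sketch below assumes the official TypeScript MCP SDK client API and a purely illustrative endpoint URL (the listing's URL field above is blank); the same shape applies to the other parameterless tools, getPlayers and getVenues.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the real server URL is not shown in this listing.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"),
);
const client = new Client({ name: "golf-majors-demo", version: "0.1.0" });
await client.connect(transport);

// Parameterless, read-only call: returns the next upcoming Major
// (Masters / PGA / US Open / The Open).
const next = await client.callTool({
  name: "whensport_golf_getNextMajor",
  arguments: {},
});
console.log(next.content);
```

The result's content array carries the tool output; since this server publishes no output schema, its exact shape is not guaranteed.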
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, so the description doesn't need to reiterate safety. It adds value by specifying the set of majors, which is helpful context beyond the structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with clear verb and noun, front-loaded, no unnecessary words. Every element earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless, read-only tool with no output schema, the description adequately covers what the tool returns (the next major from a defined list). Some may desire more details on the return format, but it's sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description naturally cannot add parameter semantics; the baseline score of 4 for no-parameter tools applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get the next upcoming Major' and lists the four majors (Masters, PGA, US Open, The Open), which distinguishes it from sibling tools that deal with players, tournaments, and venues.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving the next major, and sibling names make it clear when not to use this tool, but explicit exclusions or alternatives are not provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_golf_getPlayers (grade A, read-only)
Get the golf player roster covered by whensport.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, so the description does not need to repeat it, but it adds no behavioral context beyond what the annotations convey: nothing on rate limits or other constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no redundancy, front-loaded with key action and resource. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters, no output schema, and a simple read operation, the description is complete enough. All necessary information is conveyed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to explain any. The baseline score of 4 is appropriate for a no-parameter tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'golf player roster', with the qualifier 'covered by whensport'. That distinguishes it from sibling tools, which focus on tournaments, venues, and other entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no explicit when-to-use or when-not-to-use guidance, and no alternatives or exclusions are named. However, the tool is simple, takes no parameters, and its siblings address different resources, so correct usage is straightforward.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_golf_getTournament (grade A, read-only)
Get a single golf tournament by slug (e.g. 'the-masters', 'pga-championship', 'us-open', 'the-open' for Majors). Note: result/winner on finished tournaments may be null pending data backfill — consult primary sources for confirmed leaderboards.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | | |
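The call shape with the required slug is worth seeing once. A minimal sketch, assuming the same TypeScript MCP SDK setup and placeholder URL as in the getNextMajor example; the slug values are the ones given in the description above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "golf-majors-demo", version: "0.1.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")), // placeholder URL
);

// 'slug' is required; valid examples per the description:
// 'the-masters', 'pga-championship', 'us-open', 'the-open'.
// Note: result/winner may be null on finished tournaments.
const masters = await client.callTool({
  name: "whensport_golf_getTournament",
  arguments: { slug: "the-masters" },
});
console.log(masters.content);
```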
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description goes beyond the readOnlyHint annotation by disclosing that results may be null pending data backfill, which is critical behavioral information. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are concise and front-loaded with the core purpose, immediately followed by examples and a data caveat. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one parameter and no output schema, the description is complete. It explains what it does, how to use it, and potential data issues, covering key aspects an agent needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description compensates by providing concrete examples of valid slug values and indicating it is required. It does not specify format constraints like case sensitivity, but it's adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it gets a single golf tournament by slug, with specific examples of slug values (e.g., 'the-masters'). The purpose is specific and distinguishes from sibling tools like getTournaments or getNextMajor.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells the user to use a slug to retrieve a specific tournament and warns about null results for finished tournaments, advising to consult primary sources. While it doesn't explicitly contrast with siblings, the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_golf_getTournaments (grade B, read-only)
Get the golf tournament list — Masters, PGA Championship, US Open, Open Championship. Note: this MCP is schedule-focused; winners/leaderboards on finished tournaments may be null while ingestion catches up — consult masters.com / pgatour.com / usga.org / theopen.com for confirmed results.
| Name | Required | Description | Default |
|---|---|---|---|
| upcomingOnly | No | | |
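The lone parameter is optional and carries no schema description, so the sketch below assumes, from the name alone, that upcomingOnly is a boolean filter; that typing is unverified. Setup again mirrors the earlier examples.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "golf-majors-demo", version: "0.1.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")), // placeholder URL
);

// 'upcomingOnly' is optional and has no schema description; a boolean is
// assumed here purely from the parameter name.
const upcoming = await client.callTool({
  name: "whensport_golf_getTournaments",
  arguments: { upcomingOnly: true },
});
console.log(upcoming.content);
```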
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that winners/leaderboards may be null due to ingestion lag, adding behavioral context beyond the readOnlyHint annotation. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states purpose, second adds important caveat. No wasted words, front-loaded. Well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one parameter and no output schema, the description covers the core functionality and a key behavioral note. However, it omits details about what the list contains (e.g., dates, status) and about the parameter; including those would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'upcomingOnly' is not explained in the description. With 0% schema description coverage, the description should clarify its meaning and effect, but fails to do so. The parameter name is somewhat self-explanatory, but not fully.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get the golf tournament list' and names specific majors, making the purpose unambiguous. However, it does not explicitly differentiate from sibling tools like 'getNextMajor' or 'getTournament', so it loses a point for lack of differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The note about being schedule-focused and winners possibly being null provides usage context about data freshness and reliability, but does not explicitly state when to use this tool over siblings such as 'whensport_golf_getNextMajor' or 'whensport_golf_getTournament'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whensport_golf_getVenues (grade A, read-only)
Get the golf venue list (courses, pars, locations).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, so the description adds minimal behavioral context beyond stating output includes courses, pars, and locations. No additional constraints or side effects disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Information is front-loaded with the core purpose and specific attributes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless read-only tool with simple output (list of venues with courses, pars, locations), the description sufficiently covers functionality. No output schema, but the description outlines return content adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist in the schema, so the description naturally adds no parameter detail. With zero parameters, schema description coverage is trivially 100%, and the baseline score of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly describes the tool's purpose with verb 'Get' and resource 'golf venue list', and specifies the included attributes (courses, pars, locations). Distinct from sibling tools which target different entities (majors, players, tournaments).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for retrieving venue information, but does not explicitly state when to prefer this tool over alternatives like getTournaments or getPlayers. No exclusionary guidance provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users

For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.