World Cup History MCP

compare_tournaments

Compare up to 6 World Cup tournaments side-by-side to analyze changes in goals, attendance, top scorers, and champions across years.

Instructions

Side-by-side comparison of 2-6 World Cup tournaments. Returns total goals, goals per match, attendance, top scorer, best player, champion, runner-up, third place for each year. Use this when the user asks "compare 1986 vs 2022" or "what changed between 1990 and 2014". For a single year use get_tournament.
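
For illustration, the arguments an MCP client might pass for those two example requests could look like the following; only the years field name is taken from the input schema below, and the concrete values are assumptions rather than documented examples.

    // Hypothetical tool-call arguments; the `years` field name comes from the schema below
    const compareArgs = { years: [1986, 2022] };   // "compare 1986 vs 2022"
    const changedArgs = { years: [1990, 2014] };   // "what changed between 1990 and 2014"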

Input Schema

Name     Required   Description                                   Default
years    Yes        Years to compare, e.g. [1986, 2002, 2022]     —
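
The validation constraints (integers from 1930 to 2030, one to six entries, no extra keys) come from the Zod schema shown under Implementation Reference. This short sketch, assuming the standard zod package, illustrates which inputs pass or fail; it is not taken from the server's own tests.

    import { z } from 'zod';

    // Same shape as the compare_tournaments input schema shown below
    const input = z.object({
      years: z.array(z.number().int().min(1930).max(2030)).min(1).max(6),
    }).strict();

    input.safeParse({ years: [1986, 2002, 2022] }).success; // true
    input.safeParse({ years: [1925, 2022] }).success;       // false: 1925 is before 1930
    input.safeParse({ years: [] }).success;                 // false: at least one year required
    input.safeParse({ years: [1986], extra: 1 }).success;   // false: .strict() rejects unknown keys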

Implementation Reference

  • The handler for compare_tournaments — accepts an array of 1-6 years and calls the /compare API endpoint with comma-separated years.
    handler: async (args: { years: number[] }) =>
      api(`/compare?years=${args.years.join(',')}`),
  • Zod input schema for compare_tournaments — expects a 'years' array of integers (min 1, max 6) between 1930 and 2030.
    schema: z.object({
      years: z.array(z.number().int().min(1930).max(2030)).min(1).max(6)
        .describe('Years to compare, e.g. [1986, 2002, 2022]'),
    }).strict(),
  • src/index.ts:136-149 (registration)
    The tool registration in the tools array — defines name, description, schema, and handler for the compare_tournaments tool.
    {
      name: 'compare_tournaments',
      description:
        'Side-by-side comparison of 2-6 World Cup tournaments. Returns total goals, goals ' +
        'per match, attendance, top scorer, best player, champion, runner-up, third place ' +
        'for each year. Use this when the user asks "compare 1986 vs 2022" or "what changed ' +
        'between 1990 and 2014". For a single year use get_tournament.',
      schema: z.object({
        years: z.array(z.number().int().min(1930).max(2030)).min(1).max(6)
          .describe('Years to compare, e.g. [1986, 2002, 2022]'),
      }).strict(),
      handler: async (args: { years: number[] }) =>
        api(`/compare?years=${args.years.join(',')}`),
    },
  • The shared API fetch helper used by the compare_tournaments handler to call the external API with auth headers; a combined usage sketch follows this list.
    async function api<T = unknown>(path: string): Promise<T> {
      if (!API_KEY) {
        throw new Error(
          'WC_API_KEY is not set in the environment. Get a free key at ' +
          'https://api.zafronix.com/signup and add it to your MCP client ' +
          'config: { "env": { "WC_API_KEY": "zwc_pk_..." } }',
        );
      }
      const url = path.startsWith('http') ? path : `${API_BASE}${path}`;
      const res = await fetch(url, {
        headers: {
          'X-API-Key':  API_KEY,
          'Accept':     'application/json',
          'User-Agent': 'wc-mcp/0.1.2',
        },
      });
      if (!res.ok) {
        const body = await res.text().catch(() => '');
        throw new Error(`API ${res.status} ${res.statusText} on ${path}: ${body.slice(0, 240)}`);
      }
      return res.json() as Promise<T>;
    }
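
As a combined usage sketch only (assuming the api() helper, API_BASE, and WC_API_KEY configuration from the snippets above are in scope; the response type is illustrative, since no output schema is published):

    // Hypothetical usage of the handler logic and api() helper shown above.
    const years = [1986, 2002, 2022];

    // The handler joins the years with commas, so this resolves to
    // `${API_BASE}/compare?years=1986,2002,2022`.
    const comparison = await api<Record<string, unknown>>(
      `/compare?years=${years.join(',')}`,
    );

    // If WC_API_KEY is unset, api() throws before making any request; non-2xx
    // responses throw with the status text and the first 240 bytes of the body.
    console.log(comparison);
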
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It lists the returned fields and implies read-only behavior, but does not mention error handling or ordering.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, no fluff, direct and informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description adequately covers return values and usage constraints. Complete for a comparison tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the 'years' parameter with an example. The description adds the 2-6 tournament constraint, providing value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'compare' with a specific resource ('tournaments'), explicit mention of what is returned (goals, attendance, etc.), and a clear pointer to get_tournament for single-year lookups.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool (e.g., 'compare 1986 vs 2022') and when not to (for a single year, use get_tournament), providing a clear alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
