
eBird MCP Server

by mattjegan

get_top_100

Retrieve the top 100 bird observation contributors for a specific date and region from eBird, ranked by species count or checklist count.

Instructions

Get the top 100 contributors on a given date.

Input Schema

Name          Required  Description                                         Default
region_code   Yes       Country or subnational1 code
year          Yes       Year
month         Yes       Month
day           Yes       Day of month
ranked_by     No        'spp' for species count, 'cl' for checklist count   spp
max_results   No        Limit results

Implementation Reference

  • The handler function for the 'get_top_100' tool. It builds the query parameters, makes an API request for the top 100 contributors on the given date and region, and returns the result as formatted JSON. (A sketch of the makeRequest helper it relies on appears after this list.)
    async (args) => {
      const params: Record<string, string | number | boolean> = { rankedBy: args.ranked_by };
      if (args.max_results) params.maxResults = args.max_results;
    
      const result = await makeRequest(`/product/top100/${args.region_code}/${args.year}/${args.month}/${args.day}`, params);
      return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
    }
  • Zod schema defining the input parameters for the 'get_top_100' tool, including region, date components, ranking metric, and optional max results. (See the note on calendar-date validation after this list.)
    {
      region_code: z.string().describe("Country or subnational1 code"),
      year: z.number().min(1800).describe("Year"),
      month: z.number().min(1).max(12).describe("Month"),
      day: z.number().min(1).max(31).describe("Day of month"),
      ranked_by: z.enum(["spp", "cl"]).default("spp").describe("'spp' for species count, 'cl' for checklist count"),
      max_results: z.number().min(1).max(100).optional().describe("Limit results"),
    },
  • src/index.ts:287-305 (registration)
    The server.tool call that registers the 'get_top_100' tool with its description, input schema, and handler function.
    server.tool(
      "get_top_100",
      "Get the top 100 contributors on a given date.",
      {
        region_code: z.string().describe("Country or subnational1 code"),
        year: z.number().min(1800).describe("Year"),
        month: z.number().min(1).max(12).describe("Month"),
        day: z.number().min(1).max(31).describe("Day of month"),
        ranked_by: z.enum(["spp", "cl"]).default("spp").describe("'spp' for species count, 'cl' for checklist count"),
        max_results: z.number().min(1).max(100).optional().describe("Limit results"),
      },
      async (args) => {
        // ranked_by always has a value via the Zod default ("spp")
        const params: Record<string, string | number | boolean> = { rankedBy: args.ranked_by };
        // maxResults is optional; only forward it when the caller provided one
        if (args.max_results) params.maxResults = args.max_results;

        // eBird API 2.0 Top 100 endpoint: /product/top100/{regionCode}/{year}/{month}/{day}
        const result = await makeRequest(`/product/top100/${args.region_code}/${args.year}/${args.month}/${args.day}`, params);
        return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
      }
    );
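
Two sketches and one worked request make the snippets above concrete. None of this is verified detail from the repository.

First, the handler delegates to a makeRequest helper defined elsewhere in src/index.ts and not shown here. A minimal sketch of what such a helper might look like, assuming the eBird API 2.0 base URL and X-eBirdApiKey header authentication (both assumptions):

    const EBIRD_API_BASE = "https://api.ebird.org/v2"; // assumed base URL

    async function makeRequest(
      path: string,
      params: Record<string, string | number | boolean> = {}
    ): Promise<unknown> {
      const url = new URL(EBIRD_API_BASE + path);
      for (const [key, value] of Object.entries(params)) {
        url.searchParams.set(key, String(value));
      }
      // eBird API 2.0 authenticates with an API key header
      const response = await fetch(url, {
        headers: { "X-eBirdApiKey": process.env.EBIRD_API_KEY ?? "" },
      });
      if (!response.ok) {
        throw new Error(`eBird request failed: ${response.status} ${response.statusText}`);
      }
      return response.json();
    }

Second, the per-field bounds in the schema cannot reject impossible calendar dates: February 31 satisfies every min/max check but will fail at the API. Since server.tool takes a raw Zod shape, a cross-field check like the following would most naturally sit at the top of the handler (a sketch, not code from this server):

    function isRealDate(year: number, month: number, day: number): boolean {
      // Date rolls invalid days forward (Feb 31 becomes Mar 2 or 3),
      // so a round-trip comparison exposes impossible dates.
      const d = new Date(Date.UTC(year, month - 1, day));
      return (
        d.getUTCFullYear() === year &&
        d.getUTCMonth() === month - 1 &&
        d.getUTCDate() === day
      );
    }

Finally, reading directly from the handler: a call with region_code "US-NY", year 2024, month 5, day 18, ranked_by "cl", and max_results 25 produces

    GET /product/top100/US-NY/2024/5/18?rankedBy=cl&maxResults=25

against whatever base URL makeRequest targets.
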
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions retrieving data ('Get'), implying a read-only operation, but fails to specify critical traits such as rate limits, authentication requirements, pagination, or the format of the returned results (e.g., a list of contributor records). This leaves significant gaps in understanding how the tool behaves beyond basic input and output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
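
One way to close these gaps without touching behavior is to fold the disclosures into the description and, where the SDK version allows, attach MCP tool annotations. A hedged sketch using registerTool (whether this server's @modelcontextprotocol/sdk version exposes registerTool and annotations is an assumption; inputSchema and handler are hypothetical bindings standing in for the schema and handler shown earlier):

    server.registerTool(
      "get_top_100",
      {
        description:
          "Get the top 100 eBird contributors for a region on a given date, ranked by " +
          "species count ('spp') or checklist count ('cl'). Read-only; requires a valid " +
          "eBird API key and is subject to eBird API rate limits. Returns a JSON array " +
          "of contributor records.",
        inputSchema, // the Zod shape shown earlier (hypothetical binding)
        annotations: { readOnlyHint: true, openWorldHint: true },
      },
      handler // the handler shown earlier (hypothetical binding)
    );
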

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It directly states what the tool does, making it easy to parse quickly, and every part of the sentence contributes essential information, earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no annotations, no output schema), the description is insufficiently complete. It doesn't explain the return format (e.g., what data 'contributors' includes), behavioral constraints, or how it integrates with sibling tools. For a data retrieval tool with multiple inputs and no structured output, more context is needed to ensure proper agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
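
A cheap improvement would be to name the returned fields in the description or an output schema. Based on the public eBird API 2.0 Top 100 endpoint, one contributor record looks roughly like the following (field names are an assumption and should be checked against a live response):

    // Approximate shape of one entry in the returned array (assumed, not verified)
    interface Top100Contributor {
      profileHandle: string;
      userDisplayName: string;
      numSpecies: number;            // species total for the day
      numCompleteChecklists: number; // complete-checklist total for the day
      rowNum: number;                // rank position, 1-100
      userId: string;
    }
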

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal semantic context beyond the input schema, which has 100% coverage with detailed parameter descriptions. It implies date-based filtering via 'on a given date,' but the schema already covers 'year,' 'month,' and 'day' parameters explicitly. No additional syntax or usage nuances are provided, so the baseline score of 3 is appropriate as the schema does most of the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), the target resource ('top 100 contributors'), and the context ('on a given date'), making the purpose understandable. However, it doesn't explicitly differentiate the tool from siblings like 'get_regional_statistics' or 'get_recent_observations' that might also surface contributor data, leaving some ambiguity about its unique role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as which sibling tools might overlap in functionality (e.g., 'get_regional_statistics' for broader data or 'get_recent_observations' for time-based queries). It lacks explicit context, prerequisites, or exclusions, leaving the agent to infer usage from parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
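
A hedged example of a description that encodes this guidance, borrowing the sibling tool names from the reviewer's examples (their exact semantics are assumptions):

    "Get the top 100 eBird contributors for a region on a given date, ranked by species " +
      "or checklist count. Use this for per-day contributor leaderboards; prefer " +
      "get_regional_statistics for aggregate regional totals and get_recent_observations " +
      "for sighting data rather than contributor rankings."
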
