
eBird MCP Server

by mattjegan

get_hotspot_info

Retrieve detailed information about a specific birdwatching hotspot using its location code, such as L99381, to access observation data and site details.

Instructions

Get information about a specific hotspot.

Input Schema

| Name   | Required | Description                        | Default |
| ------ | -------- | ---------------------------------- | ------- |
| loc_id | Yes      | The location code (e.g., 'L99381') |         |
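The schema only constrains `loc_id` to be a string. A caller could sanity-check the value client-side; the pattern below (the letter "L" followed by digits) is an assumption inferred from the schema's example 'L99381', not a documented constraint of this server.

```typescript
// Hypothetical client-side check for eBird location codes. The "L" + digits
// pattern is inferred from the schema's example ('L99381'); it is not a
// documented constraint of this server.
function looksLikeLocId(value: string): boolean {
  return /^L\d+$/.test(value);
}

looksLikeLocId("L99381");    // true
looksLikeLocId("US-NY-109"); // false: a region code, not a hotspot code
```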

Implementation Reference

  • The handler function for the get_hotspot_info tool. It fetches hotspot information from the eBird API endpoint `/ref/hotspot/info/${args.loc_id}` and returns the JSON-stringified result wrapped in the MCP response format.
    async (args) => {
      const result = await makeRequest(`/ref/hotspot/info/${args.loc_id}`);
      return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
    }
  • The input schema for the get_hotspot_info tool, defining the required 'loc_id' parameter as a string using Zod.
    {
      loc_id: z.string().describe("The location code (e.g., 'L99381')"),
    },
  • src/index.ts:440-450 (registration)
    The registration of the get_hotspot_info tool using server.tool(), including the name, description, schema, and inline handler function.
    server.tool(
      "get_hotspot_info",
      "Get information about a specific hotspot.",
      {
        loc_id: z.string().describe("The location code (e.g., 'L99381')"),
      },
      async (args) => {
        const result = await makeRequest(`/ref/hotspot/info/${args.loc_id}`);
        return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
      }
    );
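The `makeRequest` helper the handler relies on is not shown on this page. A minimal sketch, assuming it prefixes the eBird v2 base URL and sends the standard `X-eBirdApiToken` header, might look like the following; the real implementation in src/index.ts may differ.

```typescript
const EBIRD_API_BASE = "https://api.ebird.org/v2";

// Placeholder: the real server presumably reads the key from configuration.
const EBIRD_API_KEY = "your-ebird-api-key";

// Pure helper so URL construction can be checked without network access.
function buildUrl(path: string): string {
  return `${EBIRD_API_BASE}${path}`;
}

// Sketch of makeRequest (an assumption; the actual helper is not shown here).
async function makeRequest(path: string): Promise<unknown> {
  const response = await fetch(buildUrl(path), {
    headers: { "X-eBirdApiToken": EBIRD_API_KEY },
  });
  if (!response.ok) {
    throw new Error(`eBird API error ${response.status} for ${path}`);
  }
  return response.json();
}
```

With this shape, the handler's call `makeRequest(`/ref/hotspot/info/${args.loc_id}`)` resolves to a GET against `https://api.ebird.org/v2/ref/hotspot/info/L99381` for the example code.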
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states the action ('Get information') without details on permissions, rate limits, response format, or error handling. This is inadequate for a tool with no structured behavioral hints, leaving significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
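For reference, the behavioral hints the review is asking about could be expressed as standard MCP tool annotations. The field names come from the MCP specification; the values below are assumptions about this tool's behavior (it appears to be a read-only external API lookup), and none of them are declared in the actual registration shown above.

```typescript
// Hypothetical annotations for get_hotspot_info; the registration on this
// page declares none. Field names come from the MCP specification.
const getHotspotInfoAnnotations = {
  readOnlyHint: true,     // only reads data; never mutates state
  destructiveHint: false, // nothing is deleted or overwritten
  idempotentHint: true,   // repeated calls with the same loc_id are safe
  openWorldHint: true,    // talks to an external service (the eBird API)
};
```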

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words, making it highly concise and front-loaded. It efficiently communicates the core purpose without unnecessary elaboration, earning full marks for brevity and structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It does not explain what information is returned, potential errors, or behavioral traits, which is insufficient for a tool that likely returns detailed hotspot data. More context is needed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
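As an illustration of what "what information is returned" could cover, a documented response shape might look like the sketch below. The field names are assumptions based on typical eBird API responses; this page does not document the actual payload.

```typescript
// Illustrative response shape for /ref/hotspot/info/{locId}. These field
// names are assumptions, not documented by this server.
interface HotspotInfo {
  locId: string;       // e.g. "L99381"
  name: string;        // human-readable hotspot name
  latitude: number;
  longitude: number;
  countryCode: string; // e.g. "US"
}

// Hypothetical example value, for illustration only.
const example: HotspotInfo = {
  locId: "L99381",
  name: "Example Hotspot",
  latitude: 0,
  longitude: 0,
  countryCode: "US",
};
```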

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with 'loc_id' documented as "The location code (e.g., 'L99381')". The description adds no parameter details beyond this, so it meets the baseline of 3: the schema handles the heavy lifting without extra value from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool's purpose as 'Get information about a specific hotspot,' which includes a verb ('Get') and resource ('hotspot'), making it clear. However, it lacks specificity about what information is retrieved and does not differentiate from sibling tools like 'get_hotspots_in_region' or 'get_nearby_hotspots,' leaving it vague in context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description does not mention prerequisites, exclusions, or comparisons to sibling tools, such as using this for a single hotspot versus 'get_hotspots_in_region' for multiple hotspots, resulting in minimal usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
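A sketch of a description that would address this gap, reusing the sibling tool names mentioned in the review. The wording is hypothetical; the exact return fields and authentication requirements are assumptions.

```typescript
// Hypothetical rewrite of the tool description adding usage guidance;
// the sibling tool names are taken from the review text above.
const improvedDescription =
  "Get details for a single eBird hotspot by its location code " +
  "(e.g. 'L99381'). Read-only lookup against the eBird API. " +
  "If you do not yet have a location code, use get_hotspots_in_region " +
  "or get_nearby_hotspots to find one first.";
```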

