get_team_awards

Read-only · Idempotent

Retrieve all awards won by an FRC team in a given season year, including award name, type, event, year, and recipient list. Useful for tracking annual recognition like the Impact Award.

Instructions

Retrieve every award won by a team during a single FRC season year. Returns award name, award type code, event key where the award was given, year, and recipient list (team key plus individual awardee for honors like Woodie Flowers Finalist). Useful for tracking annual recognition such as the Impact Award (formerly Chairman's Award), Engineering Inspiration, regional/district event winners, Excellence in Engineering, Innovation in Control, and other technical awards. For lifetime awards see get_team_awards_all.
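
For orientation, here is a sketch of what a single returned award record might look like, assuming the tool passes through The Blue Alliance APIv3 Award model; the field names and values below are illustrative and are not confirmed by this listing.

# One award record as it might appear in the tool's output (illustrative;
# modeled on The Blue Alliance APIv3 Award object, not confirmed here).
award = {
    "name": "Regional Chairman's Award",  # human-readable award name
    "award_type": 0,                      # numeric award type code
    "event_key": "2023casj",              # event where the award was given
    "recipient_list": [
        # 'awardee' is populated for individual honors such as
        # Woodie Flowers Finalist; otherwise it may be null.
        {"team_key": "frc254", "awardee": None},
    ],
    "year": 2023,
}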

Input Schema

team_key (required): FRC team key formatted as 'frc' followed by the team number with no leading zeros (e.g., 'frc86', 'frc254', 'frc1114'). Uniquely identifies a FIRST Robotics Competition team on The Blue Alliance.

year (required): FRC competition season year. FRC began in 1992 and runs one game per year (e.g., 2023 = "Charged Up", 2024 = "Crescendo", 2025 = "Reefscape"). Must be between 1992 and the next calendar year.
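
A minimal sketch of the documented constraints, assuming a Python client; the validate_args helper below is hypothetical and is not part of the server.

import re
from datetime import date

def validate_args(team_key: str, year: int) -> None:
    # Hypothetical client-side check mirroring the schema above.
    # team_key: 'frc' followed by the team number, no leading zeros.
    if not re.fullmatch(r"frc[1-9]\d*", team_key):
        raise ValueError(f"invalid team_key: {team_key!r}")
    # year: 1992 through the next calendar year.
    if not 1992 <= year <= date.today().year + 1:
        raise ValueError(f"year out of range: {year}")

validate_args("frc1114", 2015)   # OK
# validate_args("frc0254", 2015) # would raise: leading zero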
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent behavior. The description adds valuable context about the return structure (e.g., the recipient list includes the team key and individual awardee) and example awards (Impact Award), which enhances understanding without contradicting the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loading the core purpose and return data, then providing usage guidance and sibling differentiation. No extraneous information; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple input (team + year) and the absence of an output schema, the description adequately covers what the agent needs to know: what is returned and when to use it. It does not mention edge cases (e.g., a team with no awards) or pagination, but those are less critical in this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear explanations for both team_key (format, examples) and year (range, game examples). The tool description adds no meaning beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves awards won by a team in a single FRC season year, specifying the returned fields (award name, type code, event key, year, recipient list). It also distinguishes itself from get_team_awards_all (lifetime awards), ensuring no ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides use cases ('useful for tracking annual recognition...') and points to an alternative for lifetime awards (get_team_awards_all), clearly guiding when to use this tool versus its sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/withinfocus/tba-mcp-server'
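
The same request in Python, assuming the requests library is available; the response schema is not documented here, so this sketch simply prints whatever JSON comes back.

import requests

# Fetch this server's MCP directory entry (equivalent to the curl above).
resp = requests.get(
    "https://glama.ai/api/mcp/v1/servers/withinfocus/tba-mcp-server",
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # structure not documented in this listing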

If you have feedback or need assistance with the MCP directory API, please join our Discord server.