Glama
alex-rimerman

Statcast MCP Server

statcast_pitcher_arsenal_stats

Analyze pitcher performance by pitch type with batting average, slugging, whiff rate, and run values to evaluate effectiveness of specific pitches.

Instructions

Get performance stats broken down by pitch type for each pitcher.

Shows batting average, slugging, whiff rate, put-away rate, and run values for every individual pitch type a pitcher throws.

Args:
    year: Season year (e.g. 2024).
    min_plate_appearances: Minimum PA for each pitch type (default 25).
    player_name: Optional. Filter to one pitcher (all their pitch types).

Great for evaluating which specific pitches are most (or least) effective.

Input Schema

Name                   Required  Description  Default
year                   Yes       -            -
min_plate_appearances  No        -            -
player_name            No        -            -
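Given the schema above, an argument payload for this tool might look like the following sketch (the specific player name is illustrative only; the schema itself carries no descriptions or defaults):

```python
# Hypothetical arguments for statcast_pitcher_arsenal_stats, matching the input schema:
args = {
    "year": 2024,                  # required: season year
    "min_plate_appearances": 25,   # optional: per-pitch-type PA floor (description says default 25)
    "player_name": "Gerrit Cole",  # optional: filter to one pitcher's full arsenal
}
```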

Output Schema

Name    Required  Description  Default
result  Yes       -            -

Implementation Reference

  • The handler function `statcast_pitcher_arsenal_stats` is defined in `src/statcast_mcp/server.py`. It uses the `pybaseball` library to fetch data and applies filtering if a player name is provided.
    def statcast_pitcher_arsenal_stats(
        year: int,
        min_plate_appearances: int = 25,
        player_name: str | None = None,
    ) -> str:
        """Get performance stats broken down by pitch type for each pitcher.
    
        Shows batting average, slugging, whiff rate, put-away rate, and run values
        for every individual pitch type a pitcher throws.
    
        Args:
            year: Season year (e.g. 2024).
            min_plate_appearances: Minimum PA for each pitch type (default 25).
            player_name: Optional. Filter to one pitcher (all their pitch types).
    
        Great for evaluating which specific pitches are most (or least) effective.
        """
        from pybaseball import statcast_pitcher_arsenal_stats as _fn
    
        try:
            data = _fn(year, minPA=min_plate_appearances)
        except Exception as e:
            return f"Error fetching arsenal stats: {e}"
    
        if player_name:
            try:
                data = _filter_player_rows(data, player_name)
            except ValueError as e:
                return str(e)
            if data.empty:
                return (
                    f"No pitcher arsenal stats for {player_name} in {year} at "
                    f"{min_plate_appearances}+ PA per pitch type."
                )
    
        return _fmt(data, max_rows=50)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses what data is returned (the specific pitch-type metrics), which is helpful. However, it lacks operational details like rate limits, error conditions, or whether the data is real-time vs historical.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The content is well-structured with purpose first, followed by return details, Args documentation, and a use-case summary. The line breaks within sentences are slightly inefficient, but every sentence adds necessary information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately summarizes the key metrics rather than detailing the full return structure. All parameters are documented. It could be improved by noting data source constraints or season availability limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the Args section thoroughly documents all three parameters: 'year' includes a format example (2024), 'min_plate_appearances' notes the default value (25), and 'player_name' clarifies its optional filter behavior. This fully compensates for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Get') and resource ('performance stats broken down by pitch type for each pitcher'), then enumerates exact metrics returned (batting average, slugging, whiff rate, put-away rate, run values). This precisely distinguishes it from general pitching stats or batter-focused siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides an implied use case ('Great for evaluating which specific pitches are most effective'), but it does not explicitly differentiate this tool from the similarly named sibling 'statcast_pitcher_pitch_arsenal', nor state when to prefer it over 'statcast_pitcher'. No exclusions or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
