
Horse Racing News

by mchow01

get_horse_racing_news

Fetch horse racing news from Thoroughbred Daily News to stay informed about industry updates, race results, and thoroughbred developments.

Instructions

Fetch the latest horse racing news from Thoroughbred Daily News.

Args:
    limit: Maximum number of stories to return (default: 10, max: 50)

Returns:
    Dictionary containing feed information and a list of news stories

Input Schema

Name    Required  Description                          Default
limit   No        Maximum number of stories to return  10

Implementation Reference

  • The main handler function for the 'get_horse_racing_news' tool. Registered via @mcp.tool() decorator. Validates input limit, fetches and parses RSS feed using helper function, limits results, and returns structured dictionary with feed info and news stories.
    from typing import Any, Dict

    @mcp.tool()
    def get_horse_racing_news(limit: int = 10) -> Dict[str, Any]:
        """
        Fetch the latest horse racing news from Thoroughbred Daily News.
        
        Args:
            limit: Maximum number of stories to return (default: 10, max: 50)
        
        Returns:
            Dictionary containing feed information and list of news stories
        """
        # Validate limit parameter
        if limit < 1:
            limit = 1
        elif limit > 50:
            limit = 50
        
        rss_url = "https://www.thoroughbreddailynews.com/feed/"
        
        result = parse_rss_feed(rss_url)
        
        # If there was an error, return it
        if "error" in result:
            return result
        
        # Limit the number of stories returned
        if "stories" in result and len(result["stories"]) > limit:
            result["stories"] = result["stories"][:limit]
            result["total_stories"] = len(result["stories"])
            result["note"] = f"Limited to {limit} stories (use limit parameter to adjust)"
        
        return result
  • Core helper function that fetches RSS feed from given URL, parses XML, extracts feed metadata and story items, cleans content using clean_html, and structures the data or handles errors.
    import xml.etree.ElementTree as ET
    from typing import Any, Dict

    import requests

    def parse_rss_feed(url: str) -> Dict[str, Any]:
        """Fetch and parse RSS feed, returning structured data."""
        try:
            # Fetch the RSS feed
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            
            # Parse the XML
            root = ET.fromstring(response.content)
            
            # Find the channel element
            channel = root.find('channel')
            if channel is None:
                return {"error": "Could not find channel element in RSS feed"}
            
            # Extract feed metadata
            feed_info = {
                "title": channel.find('title').text if channel.find('title') is not None else "N/A",
                "description": channel.find('description').text if channel.find('description') is not None else "N/A",
                "link": channel.find('link').text if channel.find('link') is not None else "N/A"
            }
            
            # Find all item elements
            items = channel.findall('item')
            
            stories = []
            for item in items:
                title = item.find('title')
                description = item.find('description')
                link = item.find('link')
                pub_date = item.find('pubDate')
                
                story = {
                    "title": clean_html(title.text) if title is not None else "No title available",
                    "content": clean_html(description.text) if description is not None else "No content available",
                    "link": link.text if link is not None else "No link available",
                    "published": pub_date.text if pub_date is not None else "No date available"
                }
                stories.append(story)
            
            return {
                "feed_info": feed_info,
                "stories": stories,
                "total_stories": len(stories)
            }
        
        except requests.RequestException as e:
            return {"error": f"Error fetching RSS feed: {str(e)}"}
        except ET.ParseError as e:
            return {"error": f"Error parsing XML: {str(e)}"}
        except Exception as e:
            return {"error": f"Unexpected error: {str(e)}"}
  • Utility helper to clean HTML tags, decode entities, and normalize whitespace from text extracted from RSS descriptions and titles.
    import re
    from html import unescape

    def clean_html(text: str) -> str:
        """Remove HTML tags and decode HTML entities from text."""
        if not text:
            return ""
        
        # Remove HTML tags
        clean = re.compile('<.*?>')
        text = re.sub(clean, '', text)
        
        # Decode HTML entities
        text = unescape(text)
        
        # Clean up extra whitespace
        text = ' '.join(text.split())
        
        return text
  • The @mcp.tool() decorator registers the get_horse_racing_news function as an MCP tool.
    @mcp.tool()
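The limit validation and slicing in get_horse_racing_news can be sketched in isolation. This is a minimal standalone illustration, not the tool itself; the stub feed data below is invented, and in the real handler it comes from parse_rss_feed.

```python
def clamp_limit(limit: int, lo: int = 1, hi: int = 50) -> int:
    """Coerce limit into the [1, 50] range, as the tool does."""
    return max(lo, min(hi, limit))

# Stub result shaped like parse_rss_feed's output (stories are invented).
result = {
    "stories": [{"title": f"Story {i}"} for i in range(20)],
    "total_stories": 20,
}

limit = clamp_limit(3)  # an in-range value passes through unchanged
if len(result["stories"]) > limit:
    result["stories"] = result["stories"][:limit]
    result["total_stories"] = len(result["stories"])

print(limit, result["total_stories"])  # 3 3
```

Clamping rather than rejecting out-of-range values means the tool never errors on a bad limit, at the cost of silently changing what the caller asked for; the `note` field in the result is what discloses the truncation.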
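The ElementTree traversal that parse_rss_feed performs can be demonstrated against an inline RSS 2.0 snippet. The feed content here is invented for illustration; note that `findtext()` with a default collapses the "find() then check for None" pattern used in the helper into a single call.

```python
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
  <title>Sample Feed</title>
  <link>https://example.com</link>
  <description>Demo feed</description>
  <item>
    <title>First story</title>
    <link>https://example.com/1</link>
    <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
    <description>&lt;p&gt;Hello&lt;/p&gt;</description>
  </item>
</channel></rss>"""

root = ET.fromstring(SAMPLE_RSS)
channel = root.find("channel")

# findtext() returns the default when the element is missing, mirroring
# the "No title available" fallbacks in parse_rss_feed.
feed_title = channel.findtext("title", default="N/A")
stories = [
    {"title": item.findtext("title", default="No title available"),
     "link": item.findtext("link", default="No link available")}
    for item in channel.findall("item")
]

print(feed_title, len(stories), stories[0]["title"])  # Sample Feed 1 First story
```

Plain ElementTree works here because standard RSS 2.0 elements are not namespaced; feeds that use namespaced extensions (e.g. Atom or Dublin Core elements) would need qualified tag names.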
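clean_html's three-step pipeline (strip tags, decode entities, collapse whitespace) is easy to see on a sample string. The function is repeated verbatim here so the example runs standalone; the input string is invented.

```python
import re
from html import unescape

def clean_html(text: str) -> str:
    """Remove HTML tags and decode HTML entities from text."""
    if not text:
        return ""
    text = re.sub(r"<.*?>", "", text)  # strip tags (a heuristic, not a parser)
    text = unescape(text)              # decode entities like &amp;
    return " ".join(text.split())      # collapse runs of whitespace

print(clean_html("<p>Bob &amp; Alice   <b>win</b></p>"))  # Bob & Alice win
print(clean_html(""))                                     # (empty string)
```

The regex approach is a deliberate lightweight choice for RSS snippets: it handles the simple markup feeds emit, but a literal `>` inside an attribute value would confuse it, where a real HTML parser would not.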
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It states the tool fetches news, implying a read-only operation, but does not specify whether authentication is required, whether rate limits apply, or how the fetch works (e.g., real-time vs. cached). The description adds minimal behavioral context beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose. The Args and Returns sections are structured clearly, though the 'Returns' section is vague ('Dictionary containing feed information and list of news stories'). Overall, it avoids unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It does not explain the structure of the returned dictionary, error handling, or any side effects. For a tool with no structured data support, more detail on behavior and outputs is needed to be fully informative.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning for the single parameter 'limit' by explaining that it controls the maximum number of stories to return and by providing the default and maximum values. However, the input schema itself contains no parameter descriptions, so the tool description compensates only partially and does not fully detail parameter behavior or constraints beyond what is implied.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch the latest horse racing news from Thoroughbred Daily News.' It specifies the verb ('fetch') and resource ('horse racing news'), and identifies the source. However, with no sibling tools, it cannot demonstrate differentiation from alternatives, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, context for usage, or exclusions. The lack of sibling tools means no explicit comparisons are needed, but general usage context is still missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
