get_cricket_news
Fetch cricket news updates with headlines, descriptions, timestamps, categories, and direct article URLs using a reliable MCP server for cricket data.
Instructions
Get the latest cricket news from Cricbuzz.
Returns: A list of dictionaries, each containing news details: headline, description, timestamp, category, and a direct URL to the article.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | — | — | — |
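Since the tool takes no arguments, a successful call simply returns a list of news items. The snippet below sketches the shape of one such item; the values are placeholders for illustration, not real Cricbuzz content:

```python
# Illustrative shape of one item returned by get_cricket_news().
# All values below are hypothetical placeholders.
sample_item = {
    "headline": "Example headline",
    "description": "Example description of the story.",
    "timestamp": "5h ago",
    "category": "News",
    "url": "https://www.cricbuzz.com/cricket-news/example-article",
}
```

Note that keys are only present when the corresponding element is found on the page, and on failure the list instead contains a single dictionary with an `"error"` key.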
Implementation Reference
- cricket_server.py:372 (registration) — The `@mcp.tool()` decorator registers the `get_cricket_news` function as an MCP tool.

  ```python
  @mcp.tool()
  ```
- cricket_server.py:373-380 (schema) — Function signature with no input parameters and return type `list`, along with a docstring describing the tool's purpose and output format.

  ```python
  def get_cricket_news() -> list:
      """
      Get the latest cricket news from Cricbuzz.

      Returns:
          list: A list of dictionaries, each containing news details including
          headline, description, timestamp, category, and a direct URL to the article.
      """
  ```
- cricket_server.py:372-431 (handler) — The complete implementation of the `get_cricket_news` tool handler. It fetches the latest cricket news from Cricbuzz by scraping the news page, parsing headlines, descriptions, timestamps, and categories, and constructing a URL for each news item. Includes error handling for network issues.

  ```python
  @mcp.tool()
  def get_cricket_news() -> list:
      """
      Get the latest cricket news from Cricbuzz.

      Returns:
          list: A list of dictionaries, each containing news details including
          headline, description, timestamp, category, and a direct URL to the article.
      """
      link = "https://www.cricbuzz.com/cricket-news"
      try:
          response = requests.get(link, headers=HEADERS, timeout=10)
          response.raise_for_status()
          source = response.text
          page = BeautifulSoup(source, "lxml")

          news_list = []
          news_container = page.find("div", id="news-list")
          if not news_container:
              return [{"error": "Could not find the news container"}]

          stories = news_container.find_all(
              "div", class_="cb-col cb-col-100 cb-lst-itm cb-pos-rel cb-lst-itm-lg"
          )
          for story in stories:
              news_item = {}

              headline_tag = story.find("a", class_="cb-nws-hdln-ancr")
              if headline_tag:
                  news_item["headline"] = headline_tag.get("title", "").strip()
                  news_item["url"] = "https://www.cricbuzz.com" + headline_tag.get("href", "")

              description_tag = story.find("div", class_="cb-nws-intr")
              if description_tag:
                  news_item["description"] = description_tag.text.strip()

              time_tag = story.find("span", class_="cb-nws-time")
              if time_tag:
                  news_item["timestamp"] = time_tag.text.strip()

              category_tag = story.find("div", class_="cb-nws-time")
              if category_tag:
                  category_text = category_tag.text.strip()
                  if "•" in category_text:
                      parts = category_text.split("•")
                      if len(parts) > 1:
                          news_item["category"] = parts[1].strip()

              if news_item:
                  news_list.append(news_item)

          return news_list
      except requests.exceptions.ConnectionError as e:
          return [{"error": f"Connection error: {str(e)}"}]
      except requests.exceptions.Timeout as e:
          return [{"error": f"Request timeout: {str(e)}"}]
      except requests.exceptions.HTTPError as e:
          return [{"error": f"HTTP error: {str(e)}"}]
      except Exception as e:
          return [{"error": f"Failed to get cricket news: {str(e)}"}]
  ```
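The category extraction above hinges on splitting the `cb-nws-time` element's text on the "•" bullet and taking the second part. A minimal offline sketch of that logic (the helper name `split_time_and_category` and the sample strings are illustrative, not part of the server):

```python
def split_time_and_category(text: str) -> dict:
    """Mirror the handler's '•'-split for strings like '14h ago • News'."""
    item = {}
    text = text.strip()
    if "•" in text:
        parts = text.split("•")
        # The handler treats the segment after the first bullet as the category.
        if len(parts) > 1:
            item["category"] = parts[1].strip()
    return item
```

For example, `split_time_and_category("14h ago • News")` yields `{"category": "News"}`, while a string without a bullet yields an empty dict, which matches the handler omitting the `category` key in that case.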