Jupiter Broadcasting Podcast Data MCP Server

by Red5d

search_episodes

Find podcast episodes by show, date, host, or content search to locate specific Jupiter Broadcasting content.

Instructions

Search for episodes based on various criteria. At least one search parameter must be provided.

Args:
  • show_name: Name of the specific show to search in (required)
  • since_date: Only return episodes published on or after this date (YYYY-MM-DD or ISO format)
  • before_date: Only return episodes published before this date (YYYY-MM-DD or ISO format)
  • hosts: List of host names to filter by
  • text_search: Search text to match against episode titles and descriptions
  • page: Page number (1-indexed, default: 1)
  • per_page: Number of results per page (default: 5)

Returns: Dictionary containing episodes, pagination info (total, page, per_page, total_pages).
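
As an illustrative sketch, a successful call returns a dictionary shaped like the following. The individual episode fields ("title", "published_date") are assumptions for illustration only; the output schema does not document them:

```python
# Illustrative response shape for search_episodes. The pagination keys are
# documented; the episode fields shown here are assumptions, not schema facts.
example_response = {
    "episodes": [
        {"title": "Example Episode", "published_date": "2024-01-15"},
    ],
    "pagination": {
        "total": 1,
        "page": 1,
        "per_page": 5,
        "total_pages": 1,
    },
}

# The documented pagination keys are always present.
assert set(example_response["pagination"]) == {"total", "page", "per_page", "total_pages"}
```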

Input Schema

Name         Required  Description  Default
show_name    Yes
since_date   No
before_date  No
hosts        No
text_search  No
page         No
per_page     No

Output Schema

No arguments

Implementation Reference

  • MCP tool handler for 'search_episodes'. Registered via @mcp.tool() decorator. Validates inputs, delegates search to rss_parser, adds pagination, and handles errors. Function signature and docstring define the tool schema.
    from typing import Any, Dict, List, Optional

    @mcp.tool()
    def search_episodes(
        show_name: str,
        since_date: Optional[str] = None,
        before_date: Optional[str] = None,
        hosts: Optional[List[str]] = None,
        text_search: Optional[str] = None,
        page: int = 1,
        per_page: int = 5,
    ) -> Dict[str, Any]:
        """Search for episodes based on various criteria. At least one search parameter must be provided.
        
        Args:
            show_name: Name of the specific show to search in (required)
            since_date: Only return episodes published on or after this date (YYYY-MM-DD or ISO format)
            before_date: Only return episodes published before this date (YYYY-MM-DD or ISO format)
            hosts: List of host names to filter by
            text_search: Search text to match against episode titles and descriptions
            page: Page number (1-indexed, default: 1)
            per_page: Number of results per page (default: 5)
        
        Returns:
            Dictionary containing episodes, pagination info (total, page, per_page, total_pages).
        """
        try:
            results = rss_parser.search_episodes(
                show_name=show_name,
                since_date=since_date,
                before_date=before_date,
                hosts=hosts,
                text_search=text_search,
            )
            
            total = len(results)
            total_pages = (total + per_page - 1) // per_page if per_page > 0 else 0
            start_idx = (page - 1) * per_page
            end_idx = start_idx + per_page
            
            return {
                "episodes": results[start_idx:end_idx],
                "pagination": {
                    "total": total,
                    "page": page,
                    "per_page": per_page,
                    "total_pages": total_pages,
                }
            }
        except ValueError as e:
            return {"error": str(e)}
        except Exception as e:
            return {"error": f"Search failed: {str(e)}"}
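The handler's pagination uses ceiling division for total_pages and a 1-indexed slice window. A minimal standalone sketch of the same arithmetic (the `paginate` helper is mine, not part of the server):

```python
def paginate(total: int, page: int, per_page: int) -> dict:
    """Mirror the handler's pagination arithmetic: ceiling division for
    total_pages, and a (start, end) slice window for a 1-indexed page."""
    total_pages = (total + per_page - 1) // per_page if per_page > 0 else 0
    start_idx = (page - 1) * per_page
    return {"total_pages": total_pages, "start": start_idx, "end": start_idx + per_page}

print(paginate(12, 2, 5))  # → {'total_pages': 3, 'start': 5, 'end': 10}
```

Note that slicing past the end of the results list is safe in Python, so a too-large page number simply yields an empty episodes list rather than an error.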
  • Helper method in PodcastRSSParser class implementing the core search logic: fetches RSS feeds, parses episodes, applies date/host/text filters, and collects matching episodes.
    def search_episodes(
        self,
        show_name: Optional[str] = None,
        since_date: Optional[str] = None,
        before_date: Optional[str] = None,
        hosts: Optional[List[str]] = None,
        text_search: Optional[str] = None,
    ) -> List[Dict[str, Any]]:
        """Search episodes based on provided criteria."""
        if not any([show_name, since_date, before_date, hosts, text_search]):
            raise ValueError("At least one search parameter must be provided")
        
        results = []
        
        # Determine which shows to search
        shows_to_search = [show_name] if show_name else self.get_shows()
        
        # Parse date filters
        since_dt = None
        before_dt = None
        if since_date:
            since_dt = self._parse_date(since_date)
        if before_date:
            before_dt = self._parse_date(before_date)
        
        for show in shows_to_search:
            feed_root = self._get_feed(show)
            if feed_root is None:
                continue
                
            # Find all item elements (episodes)
            items = feed_root.xpath('//item')
            for item in items:
                episode_data = self._parse_episode(show, item)
                
                # Apply filters
                if since_dt and episode_data.get("published_date"):
                    episode_dt = self._parse_date(episode_data["published_date"])
                    if episode_dt and episode_dt < since_dt:
                        continue
                
                if before_dt and episode_data.get("published_date"):
                    episode_dt = self._parse_date(episode_data["published_date"])
                    if episode_dt and episode_dt > before_dt:
                        continue
                
                if hosts:
                    episode_hosts = episode_data.get("hosts", [])
                    if not any(host.lower() in [h.lower() for h in episode_hosts] for host in hosts):
                        continue
                
                if text_search:
                    search_text = text_search.lower()
                    title = episode_data.get("title", "").lower()
                    description = episode_data.get("description", "").lower()
                    if search_text not in title and search_text not in description:
                        continue
                
                results.append(episode_data)
        
        return results
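The host filter above matches case-insensitively and passes if any requested host appears in the episode's host list. A self-contained sketch of that check (the `matches_hosts` name and the sample host names are illustrative, not from the parser):

```python
from typing import List

def matches_hosts(episode_hosts: List[str], wanted: List[str]) -> bool:
    """True if any requested host appears in the episode's host list,
    compared case-insensitively, mirroring the parser's filter."""
    lowered = [h.lower() for h in episode_hosts]
    return any(host.lower() in lowered for host in wanted)

assert matches_hosts(["Chris Fisher", "Wes Payne"], ["chris fisher"])
assert not matches_hosts(["Chris Fisher"], ["Brent"])
```

The same any-match semantics mean that passing several hosts widens the result set rather than narrowing it.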
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that at least one search parameter is required and describes the return format, but lacks details on permissions, rate limits, error handling, or whether this is a read-only operation. For a search tool with 7 parameters, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear opening sentence, followed by organized 'Args' and 'Returns' sections. It's appropriately sized for a tool with 7 parameters, though the 'At least one search parameter must be provided' note could be integrated more smoothly into the parameter descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 parameters, no annotations, but with an output schema), the description is largely complete. It covers all parameters in detail and describes the return format, though it could benefit from more behavioral context (e.g., read-only nature, error cases). The output schema reduces the need to fully explain returns, but some operational guidance is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must fully compensate. It provides detailed semantics for all 7 parameters, including required status, formats (e.g., YYYY-MM-DD or ISO for dates), defaults (page: 1, per_page: 5), and usage context (e.g., 'search text to match against episode titles and descriptions'). This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search for episodes based on various criteria.' It specifies the resource (episodes) and action (search), though it doesn't explicitly differentiate from sibling tools like 'get_episode' or 'list_shows' beyond the search functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context: 'At least one search parameter must be provided' and implies usage for filtered searches. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_episode' (for single episodes) or 'list_shows' (for shows rather than episodes), leaving the agent to infer distinctions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
