# search_torrents

Searches ThePirateBay, Nyaa, and YggTorrent for torrents matching a query. Results can be filtered by source and capped at a maximum item count.
## Instructions

Search for torrents on sources `thepiratebay.org`, `nyaa.si`, and `yggtorrent`.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| `max_items` | No | Maximum number of items to return. | `10` |
| `query` | Yes | Search query (space-separated keywords). | |
| `sources` | No | Sources to search (`thepiratebay.org`, `nyaa.si`, `yggtorrent`). | |
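For illustration, a hypothetical set of arguments matching this schema (the query string and chosen sources are made up for the example):

```python
# Hypothetical arguments for a search_torrents tool call.
arguments = {
    "query": "big buck bunny 1080p",             # required: space-separated keywords
    "sources": ["thepiratebay.org", "nyaa.si"],  # optional: restrict the sources searched
    "max_items": 5,                              # optional: cap the result count
}

# Only "query" is required; the other keys may be omitted.
required = {"query"}
assert required.issubset(arguments)
```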
## Implementation Reference
- `torrent_search/mcp_server.py:41-61` (handler): MCP tool handler for the `search_torrents` tool. It invokes the TorrentSearchApi to perform the search and formats the output as a string response.

  ```python
  @mcp.tool()
  async def search_torrents(query: str) -> str:
      """Searches for torrents using a query (space-separated keywords) and returns a list of torrent results.

      # Instructions:
      - To be called after `prepare_search_query`.
      - Prioritize results using the following hierarchy: is 1080p > smaller file size > is x265 > max seeders+leechers.
      - Recommend up to 3 of the best results, **always** providing filename, file size, seeders/leechers, date, source, and an ultra concise reason.
      - If the search results are too broad, suggest the user provide more specific keywords.
      - Keep recommendations and suggestions concise.
      """
      logger.info(f"Searching for torrents: {query}")
      found_torrents: list[Torrent] = await torrent_search_api.search_torrents(query)
      if not found_torrents:
          return "No torrents found"
      elif found_torrents and not INCLUDE_LINKS:
          # Greatly reduce token usage
          shorted_torrents = deepcopy(found_torrents)  # Leave cache intact
          for torrent in shorted_torrents:
              torrent.magnet_link = None
              torrent.torrent_file = None
          return "\n".join([str(torrent) for torrent in shorted_torrents])
      return "\n".join([str(torrent) for torrent in found_torrents])
  ```
- TorrentSearchApi.search_torrents: method that aggregates torrent results from scraper sources and YGG Torrent, sorts by seeders+leechers, and caches results.

  ```python
  @cached(ttl=300, key_builder=key_builder)  # type: ignore[untyped-decorator]  # 5min
  async def search_torrents(
      self,
      query: str,
      max_items: int = 10,
  ) -> list[Torrent]:
      """
      Search for torrents on ThePirateBay, Nyaa and YGG Torrent.

      Args:
          query: Search query.
          max_items: Maximum number of items to return.

      Returns:
          A list of torrent results.
      """
      found_torrents: list[Torrent] = []
      if any(source != "yggtorrent" for source in SOURCES):
          found_torrents.extend(await search_torrents(query, SOURCES))
      if "yggtorrent" in SOURCES:
          found_torrents.extend(
              [
                  Torrent.format(**torrent.model_dump(), source="yggtorrent")
                  for torrent in ygg_api.search_torrents(query)
              ]
          )
      found_torrents = list(
          sorted(
              found_torrents,
              key=lambda torrent: torrent.seeders + torrent.leechers,
              reverse=True,
          )
      )[:max_items]
      for torrent in found_torrents:
          torrent.prepend_info(query, max_items)
      self.CACHE.clean()  # Clean cache routine
      self.CACHE.update(found_torrents)
      return found_torrents
  ```
- Core implementation that scrapes torrent sites (ThePirateBay, Nyaa) using crawl4ai, parses the results, and extracts Torrent objects with retries.

  ```python
  async def search_torrents(
      query: str,
      sources: list[str] | None = None,
      max_retries: int = 1,
  ) -> list[Torrent]:
      """
      Search for torrents on ThePirateBay and Nyaa.

      Corresponds to GET /torrents

      Args:
          query: Search query.
          sources: List of valid sources to scrape from.
          max_retries: Maximum number of retries.

      Returns:
          A list of torrent results.
      """
      start_time = time()
      scraped_results: list[str] = await scrape_torrents(query, sources=sources)
      torrents: list[Torrent] = []
      retries = 0
      while retries < max_retries:
          try:
              torrents = extract_torrents(scraped_results)
              print(f"Successfully extracted results in {time() - start_time:.2f} sec.")
              return torrents
          except Exception:
              retries += 1
              print(f"Failed to extract results: Attempt {retries}/{max_retries}")
      print(
          f"Exhausted all {max_retries} retries. "
          f"Returning empty list. Total time: {time() - start_time:.2f} sec."
      )
      return torrents
  ```
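When `INCLUDE_LINKS` is off, the MCP handler trims magnet links and torrent files from a deep copy of the results, so the cached Torrent objects keep theirs. A minimal sketch of that copy-then-mutate pattern, using a hypothetical stand-in for the Torrent model:

```python
from copy import deepcopy
from dataclasses import dataclass
from typing import Optional

@dataclass
class CachedResult:
    # Hypothetical stand-in for the Torrent model.
    filename: str
    magnet_link: Optional[str]

cache = [CachedResult("example.mkv", "magnet:?xt=urn:btih:0000")]

# Strip the heavy field from a copy only, leaving the cache intact.
trimmed = deepcopy(cache)
for item in trimmed:
    item.magnet_link = None

assert trimmed[0].magnet_link is None
assert cache[0].magnet_link is not None  # cached copy untouched
```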
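The aggregation method ranks everything it collected by combined seeders and leechers before truncating to `max_items`. That step in isolation, with hypothetical minimal records standing in for Torrent objects:

```python
# Hypothetical minimal records standing in for Torrent objects.
results = [
    {"name": "a", "seeders": 10, "leechers": 2},
    {"name": "b", "seeders": 50, "leechers": 5},
    {"name": "c", "seeders": 1, "leechers": 0},
]
max_items = 2

# Sort by combined swarm size, descending, then keep the top max_items.
top = sorted(
    results,
    key=lambda t: t["seeders"] + t["leechers"],
    reverse=True,
)[:max_items]

print([t["name"] for t in top])  # ['b', 'a']
```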
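The scraper's extraction step retries up to `max_retries` times and returns immediately on the first success; only when every attempt fails does it fall through to the final "exhausted" message. The same control flow, reduced to a toy example with a hypothetical parser that fails on its first call:

```python
def flaky_extract(counter: list[int]) -> str:
    # Hypothetical parser: fails on the first call, succeeds afterwards.
    counter[0] += 1
    if counter[0] < 2:
        raise ValueError("parse error")
    return "parsed"

def run(max_retries: int) -> str | None:
    counter = [0]
    retries = 0
    while retries < max_retries:
        try:
            return flaky_extract(counter)  # return on first success
        except ValueError:
            retries += 1
    return None  # exhausted all retries

assert run(1) is None       # single attempt fails, no retry budget left
assert run(2) == "parsed"   # second attempt succeeds
```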