
lutris-source-mcp

by revelri

prepare_install_source

Searches Prowlarr for a game, downloads via qBittorrent, polls until complete, then classifies and returns a path for Lutris installation.

Instructions

Search Prowlarr, hand off the best candidate to qBittorrent, poll until complete (5-min stall timeout, configurable), classify the tree, and return a path lutris-mcp's install_from_yaml or install_from_directory consumes directly. mutates: true

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | | |
| indexer_categories | No | | None |
| indexer_ids | No | | None |
| max_size_gb | No | | None |
| min_seeders | No | | None |
| freeleech_only | No | | None |
| stall_timeout_seconds | No | | DEFAULT_STALL_TIMEOUT_S |
| poll_interval_seconds | No | | DEFAULT_POLL_INTERVAL_S |
| confirm | No | | False |

Output Schema


No arguments
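Given the input schema above, the two-step confirm flow can be sketched as a pair of call payloads. This is a hypothetical illustration: the argument values and the game title are placeholders, and the envelope follows the standard MCP `tools/call` shape rather than anything specific to this server.

```python
# Hypothetical "tools/call" payloads for prepare_install_source.
# First call omits confirm (or sets it False) and should come back with a
# confirmation prompt; the second, identical call with confirm=True proceeds.
preview_call = {
    "method": "tools/call",
    "params": {
        "name": "prepare_install_source",
        "arguments": {
            "query": "Example Game",   # placeholder title
            "max_size_gb": 30.0,
            "min_seeders": 5,
            "confirm": False,          # dry run: expect a confirm request back
        },
    },
}

# Same arguments, confirm=True: actually searches, downloads, and classifies.
confirmed_call = {
    **preview_call,
    "params": {
        **preview_call["params"],
        "arguments": {**preview_call["params"]["arguments"], "confirm": True},
    },
}
```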

Implementation Reference

  • The main handler function for the 'prepare_install_source' MCP tool. Decorated with @mcp.tool to register as an MCP tool and @confirm_required to enforce a two-step confirm flow. Searches Prowlarr, filters/ranks candidates, hands off the best to qBittorrent, polls until download completes (with stall detection), classifies the result, and returns path/kind/source metadata.
    @mcp.tool(
        description="Search Prowlarr, hand off the best candidate to qBittorrent, "
        "poll until complete (5-min stall timeout, configurable), classify the "
        "tree, and return a path lutris-mcp's install_from_yaml or "
        "install_from_directory consumes directly. mutates: true"
    )
    @confirm_required("prepare_install_source")
    def prepare_install_source(
        query: str,
        *,
        indexer_categories: list[int] | None = None,
        indexer_ids: list[int] | None = None,
        max_size_gb: float | None = None,
        min_seeders: int | None = None,
        freeleech_only: bool | None = None,
        stall_timeout_seconds: float = DEFAULT_STALL_TIMEOUT_S,
        poll_interval_seconds: float = DEFAULT_POLL_INTERVAL_S,
        confirm: bool = False,
    ) -> dict[str, Any]:
        cfg = _cfg.load()
        pol = cfg.policy
        eff_max_bytes = int((max_size_gb if max_size_gb is not None else pol.max_size_gb) * 1024**3)
        eff_min_seeders = min_seeders if min_seeders is not None else pol.min_seeders
        eff_freeleech = freeleech_only if freeleech_only is not None else pol.freeleech_only
    
        pw = Prowlarr(cfg.prowlarr)
        raw_releases = pw.search(
            query,
            indexer_ids=indexer_ids or (pol.allow_indexers or None),
            categories=indexer_categories,
        )
        candidates = [normalize_release(r) for r in raw_releases]
        ranked = _ranking.filter_and_rank(
            candidates,
            blocklist=pol.blocklist,
            max_size_bytes=eff_max_bytes,
            min_seeders=eff_min_seeders,
            freeleech_only=eff_freeleech,
        )
        if not ranked:
            return {
                "ok": False,
                "reason": "no_candidates",
                "considered": len(candidates),
                "filters": {
                    "blocklist": pol.blocklist,
                    "max_size_gb": eff_max_bytes / 1024**3,
                    "min_seeders": eff_min_seeders,
                    "freeleech_only": eff_freeleech,
                },
            }
    
        pick = ranked[0]
        qb = Qbittorrent(cfg.qbittorrent)
        started = time.time()
        qb.add_url(pick["download_url"], save_path=cfg.qbittorrent.download_dir or None)
    
        # Wait briefly for metadata so qBittorrent reports the infohash and files.
        infohash = pick["infohash"]
        if not infohash:
            infohash = _wait_for_infohash(qb, pick["title"], DEFAULT_METADATA_WAIT_S)
            if not infohash:
                return {"ok": False, "reason": "metadata_timeout", "title": pick["title"]}
    
        last_dl = 0
        last_progress_at = time.time()
        while True:
            info = qb.info(infohash)
            if info is None:
                time.sleep(poll_interval_seconds)
                continue
            dl = int(info.get("downloaded") or 0)
            elapsed = time.time() - started
            rate_kib = (dl - last_dl) / max(poll_interval_seconds, 1.0) / 1024.0
            _emit_heartbeat(infohash, info, rate_kib, elapsed)
            state = info.get("state", "")
            if state in ("uploading", "stalledUP", "queuedUP", "pausedUP", "checkingUP", "forcedUP"):
                break
            if dl > last_dl:
                last_dl = dl
                last_progress_at = time.time()
            elif (time.time() - last_progress_at) > stall_timeout_seconds:
                return {
                    "ok": False,
                    "reason": "stalled",
                    "infohash": infohash,
                    "downloaded_bytes": dl,
                    "elapsed_seconds": round(elapsed, 1),
                }
            time.sleep(poll_interval_seconds)
    
        final_info = qb.info(infohash) or {}
        content_path = final_info.get("content_path") or final_info.get("save_path")
        if not content_path or not _classify.readable(content_path):
            return {
                "ok": False,
                "reason": "path_unreadable",
                "infohash": infohash,
                "content_path": content_path,
            }
        kind = _classify.classify(content_path)
        return {
            "ok": True,
            "path": str(Path(content_path).resolve()),
            "kind": kind,
            "source": {
                "indexer": pick["indexer"],
                "title": pick["title"],
                "size_bytes": pick["size_bytes"],
                "infohash": infohash,
                "freeleech": pick["freeleech"],
                "indexer_priority": pick["indexer_priority"],
            },
            "elapsed_seconds": round(time.time() - started, 1),
        }
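The `@confirm_required` decorator the handler relies on is not shown in this reference. A minimal sketch of such a two-step confirm gate, assuming it simply short-circuits when `confirm` is falsy (the real decorator may differ), might look like:

```python
import functools
from typing import Any, Callable

def confirm_required(tool_name: str) -> Callable:
    """Hypothetical sketch of a two-step confirm gate for mutating tools.
    If the caller did not pass confirm=True, return a refusal payload
    instead of running the tool body."""
    def decorator(fn: Callable[..., dict[str, Any]]) -> Callable[..., dict[str, Any]]:
        @functools.wraps(fn)
        def wrapper(*args: Any, **kwargs: Any) -> dict[str, Any]:
            if not kwargs.get("confirm", False):
                return {
                    "ok": False,
                    "reason": "confirmation_required",
                    "hint": f"Re-call {tool_name} with confirm=True to proceed.",
                }
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```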
  • Parameter schema for the tool: query (required), optional indexer filters, size/seeders/freeleech constraints, stall/poll timing, and a confirm flag.
    def prepare_install_source(
        query: str,
        *,
        indexer_categories: list[int] | None = None,
        indexer_ids: list[int] | None = None,
        max_size_gb: float | None = None,
        min_seeders: int | None = None,
        freeleech_only: bool | None = None,
        stall_timeout_seconds: float = DEFAULT_STALL_TIMEOUT_S,
        poll_interval_seconds: float = DEFAULT_POLL_INTERVAL_S,
        confirm: bool = False,
    ) -> dict[str, Any]:
  • Registration of the function as an MCP tool via the @mcp.tool() decorator on the FastMCP instance from server.py.
    @mcp.tool(
        description="Search Prowlarr, hand off the best candidate to qBittorrent, "
        "poll until complete (5-min stall timeout, configurable), classify the "
        "tree, and return a path lutris-mcp's install_from_yaml or "
        "install_from_directory consumes directly. mutates: true"
    )
  • Helper that logs and prints a heartbeat line during torrent download polling so a hang is distinguishable from a slow download.
    def _emit_heartbeat(infohash: str, info: dict[str, Any], rate_kib: float, elapsed: float) -> None:
        """Log+print a heartbeat line so a real hang is visibly distinguishable from
        a healthy slow download."""
        dl = int(info.get("downloaded") or 0)
        total = int(info.get("size") or info.get("total_size") or 0)
        pct = (100.0 * dl / total) if total else 0.0
        peers = int(info.get("num_seeds") or 0) + int(info.get("num_leechs") or 0)
        line = (
            f"[prepare_install_source] {infohash[:8]} "
            f"dl={dl}/{total} ({pct:.1f}%) peers={peers} "
            f"rate={rate_kib:.1f}KiB/s elapsed={elapsed:.0f}s"
        )
        log.info(line)
        print(line, flush=True)
  • Helper that polls qBittorrent's torrent list until it finds a torrent matching the title, returning the infohash (used when Prowlarr doesn't supply one upfront).
    def _wait_for_infohash(qb: Qbittorrent, title: str, timeout: float) -> str:
        """Poll qBittorrent's torrent list until we find one whose name matches
        (qBittorrent doesn't echo the infohash on add)."""
        deadline = time.time() + timeout
        title_norm = title.lower()
        while time.time() < deadline:
            try:
                rs = qb._request("GET", "/api/v2/torrents/info").json()
            except Exception:
                rs = []
            for t in rs:
                if title_norm in (t.get("name", "").lower()):
                    return (t.get("hash") or "").lower()
            time.sleep(2.0)
        return ""
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the multi-step process, including polling with a configurable stall timeout (5 minutes default). It also notes that the tool mutates state. The return value is a path consumed by specific functions. However, failure modes, error handling, and permission requirements are not discussed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that packs the entire process flow without redundancy. It is front-loaded with the key action. However, the sentence is somewhat run-on and could be structured into multiple sentences for clarity. Still, it is concise and each clause adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (9 parameters, multi-step process) and absence of annotations, the description provides a high-level overview of the process and return value. It does not cover all parameters or error scenarios, but the existence of an output schema likely compensates for return value details. The description is adequate but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must carry the burden of explaining parameters. It mentions only the configurable stall timeout and implies a polling interval, but does not describe the other seven parameters (query, indexer filters, size/seeder constraints, etc.). The parameter names are somewhat self-explanatory, but the description adds minimal semantic value beyond them.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: search Prowlarr, hand off to qBittorrent, poll for completion, classify, and return a path for downstream consumption. It differentiates from siblings (pipeline management) by specifying a concrete resource and outcome.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (preparing an install source) and mentions downstream consumers, but it does not explicitly state when to use this tool versus alternatives or provide exclusions. Sibling tools are for pipeline control, suggesting this is for setup, but no direct guidance is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
