list_installed_docsets
List all installed documentation sets in Dash. The tool returns an empty list if no docsets are installed and truncates results that would exceed 25,000 tokens.
Instructions
List all installed documentation sets in Dash. An empty list is returned if the user has no docsets installed. Results are automatically truncated if they would exceed 25,000 tokens.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
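
Since the tool takes no input, a client invokes it with an empty arguments object. Below is a minimal sketch using the official MCP Python SDK; the server launch command (`uvx dash-mcp-server`) is an assumption, not taken from the source.

```python
# Sketch: calling list_installed_docsets over stdio with the MCP Python SDK.
# The server launch command below is an assumption; adjust it to your install.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="uvx", args=["dash-mcp-server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The tool takes no arguments, so pass an empty dict.
            result = await session.call_tool("list_installed_docsets", {})
            print(result.content)


asyncio.run(main())
```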
Implementation Reference
- src/dash_mcp_server/server.py:187-241 (handler): the main handler for the list_installed_docsets tool. It queries the Dash API for installed docsets, truncates the list to stay under the 25,000-token limit using the estimate_tokens helper, handles errors, and returns a DocsetResults object.

```python
async def list_installed_docsets(ctx: Context) -> DocsetResults:
    """List all installed documentation sets in Dash.

    An empty list is returned if the user has no docsets installed.
    Results are automatically truncated if they would exceed 25,000 tokens.
    """
    try:
        base_url = await working_api_base_url(ctx)
        if base_url is None:
            return DocsetResults(error="Failed to connect to Dash API Server. Please ensure Dash is running and the API server is enabled (in Dash Settings > Integration).")

        await ctx.debug("Fetching installed docsets from Dash API")
        with httpx.Client(timeout=30.0) as client:
            response = client.get(f"{base_url}/docsets/list")
            response.raise_for_status()
            result = response.json()

        docsets = result.get("docsets", [])
        await ctx.info(f"Found {len(docsets)} installed docsets")

        # Build result list with token limit checking
        token_limit = 25000
        current_tokens = 100  # Base overhead for response structure
        limited_docsets = []

        for docset in docsets:
            docset_info = DocsetResult(
                name=docset["name"],
                identifier=docset["identifier"],
                platform=docset["platform"],
                full_text_search=docset["full_text_search"],
                notice=docset.get("notice")
            )

            # Estimate tokens for this docset
            docset_tokens = estimate_tokens(docset_info)
            if current_tokens + docset_tokens > token_limit:
                await ctx.warning(f"Token limit reached. Returning {len(limited_docsets)} of {len(docsets)} docsets to stay under 25k token limit.")
                break

            limited_docsets.append(docset_info)
            current_tokens += docset_tokens

        if len(limited_docsets) < len(docsets):
            await ctx.info(f"Returned {len(limited_docsets)} docsets (truncated from {len(docsets)} due to token limit)")

        return DocsetResults(docsets=limited_docsets)

    except httpx.HTTPStatusError as e:
        if e.response.status_code == 404:
            await ctx.warning("No docsets found. Install some in Settings > Downloads.")
            return DocsetResults(error="No docsets found. Instruct the user to install some docsets in Settings > Downloads.")
        return DocsetResults(error=f"HTTP error: {e}")
    except Exception as e:
        await ctx.error(f"Failed to get installed docsets: {e}")
        return DocsetResults(error=f"Failed to get installed docsets: {e}")
```
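For reference, the same data can be fetched outside the MCP server by calling the Dash API endpoint the handler uses. This is a hypothetical sketch: the base URL below is a placeholder, since the real one is discovered at runtime via working_api_base_url().

```python
# Hypothetical direct call to the Dash API endpoint used by the handler.
# The base URL is a placeholder; the server resolves the real one at runtime.
import httpx

base_url = "http://127.0.0.1:2111"  # placeholder, not from the source

with httpx.Client(timeout=30.0) as client:
    response = client.get(f"{base_url}/docsets/list")
    response.raise_for_status()
    for docset in response.json().get("docsets", []):
        print(docset["name"], docset["identifier"], docset["full_text_search"])
```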
- Pydantic model defining the output schema of the list_installed_docsets tool: a list of DocsetResult objects and an optional error message.

```python
class DocsetResults(BaseModel):
    """Result from listing docsets."""
    docsets: list[DocsetResult] = Field(description="List of installed docsets", default_factory=list)
    error: Optional[str] = Field(description="Error message if there was an issue", default=None)
```
- Pydantic model used in the output of list_installed_docsets; holds the information for an individual docset.

```python
class DocsetResult(BaseModel):
    """Information about a docset."""
    name: str = Field(description="Display name of the docset")
    identifier: str = Field(description="Unique identifier")
    platform: str = Field(description="Platform/type of the docset")
    full_text_search: str = Field(description="Full-text search status: 'not supported', 'disabled', 'indexing', or 'enabled'")
    notice: Optional[str] = Field(description="Optional notice about the docset status", default=None)
```
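To see the wire shape these models produce, here is an illustrative snippet (field values invented) that assumes the two model definitions above are in scope:

```python
# Illustrative only: build the models above and print the JSON payload an
# MCP client would receive. Field values are invented for the example.
results = DocsetResults(
    docsets=[
        DocsetResult(
            name="Python 3",
            identifier="python3",
            platform="python",
            full_text_search="enabled",
        )
    ]
)
print(results.model_dump_json(indent=2))
```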
- src/dash_mcp_server/server.py:187-187 (registration): the @mcp.tool() decorator registers the list_installed_docsets function as an MCP tool with FastMCP.

```python
@mcp.tool()
async def list_installed_docsets(ctx: Context) -> DocsetResults:
```
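For context, the registration pattern in isolation looks roughly like this; the server name and the stub body are assumptions for illustration:

```python
# Condensed FastMCP registration sketch. The server name and stub body are
# assumptions; the real handler body is shown in the handler reference above.
from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("dash")


@mcp.tool()
async def list_installed_docsets(ctx: Context) -> list[dict]:
    """Stub standing in for the real handler."""
    return []


if __name__ == "__main__":
    mcp.run()
```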
- Helper function used by the handler to estimate the token count of docset results when applying the 25,000-token truncation limit.

```python
def estimate_tokens(obj) -> int:
    """Estimate token count for a serialized object.

    Rough approximation: 1 token ≈ 4 characters.
    """
    if isinstance(obj, str):
        return max(1, len(obj) // 4)
    elif isinstance(obj, (list, tuple)):
        return sum(estimate_tokens(item) for item in obj)
    elif isinstance(obj, dict):
        return sum(estimate_tokens(k) + estimate_tokens(v) for k, v in obj.items())
    elif hasattr(obj, 'model_dump'):
        # Pydantic model
        return estimate_tokens(obj.model_dump())
    else:
        return max(1, len(str(obj)) // 4)
```
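A quick worked example of the heuristic, assuming estimate_tokens is in scope (values invented):

```python
# Worked example of the 1-token-per-4-characters heuristic on a dict shaped
# like one docset entry (values invented). Each key and value contributes
# max(1, len(s) // 4) tokens.
sample = {
    "name": "Python 3",             # 1 + 2 tokens
    "identifier": "python3",        # 2 + 1 tokens
    "platform": "python",           # 2 + 1 tokens
    "full_text_search": "enabled",  # 4 + 1 tokens
}
print(estimate_tokens(sample))  # -> 14
```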